metahuman
Learning When to Ask: Simulation-Trained Humanoids for Mental-Health Diagnosis
Cenacchi, Filippo, Richards, Deborah, Cao, Longbing
Testing humanoid robots with users is slow, causes hardware wear, and limits iteration and participant diversity. Yet screening agents must master conversational timing, prosody, backchannels, and what to attend to in faces and speech for depression and PTSD. Most simulators omit policy learning with nonverbal dynamics; many controllers chase task accuracy while underweighting trust, pacing, and rapport. We virtualise the humanoid as a conversational agent to train without hardware burden. Our agent-centred, simulation-first pipeline turns interview data into 276 Unreal Engine MetaHuman patients with synchronised speech, gaze/face, and head-torso poses, plus PHQ-8 and PCL-C flows. A perception-fusion-policy loop decides what and when to speak, when to backchannel, and how to avoid interruptions, under a safety shield. Training uses counterfactual replay (bounded nonverbal perturbations) and an uncertainty-aware turn manager that probes to reduce diagnostic ambiguity. Results are simulation-only; the humanoid is the transfer target. Comparing three controllers, a custom TD3 (Twin Delayed DDPG) outperformed PPO and CEM, achieving near-ceiling coverage with steadier pace at comparable rewards. Decision-quality analyses show negligible turn overlap, aligned cut timing, fewer clarification prompts, and shorter waits. Performance stays stable under modality dropout and a renderer swap, and rankings hold on a held-out patient split. Contributions: (1) an agent-centred simulator that turns interviews into 276 interactive patients with bounded nonverbal counterfactuals; (2) a safe learning loop that treats timing and rapport as first-class control variables; (3) a comparative study (TD3 vs PPO/CEM) with clear gains in completeness and social timing; and (4) ablations and robustness analyses explaining the gains and enabling clinician-supervised humanoid pilots.
- Europe > Middle East > Cyprus > Pafos > Paphos (0.05)
- Oceania > Australia > New South Wales > Sydney (0.04)
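The abstract above describes an uncertainty-aware turn manager that probes only while diagnostic ambiguity remains high. A minimal sketch of that idea, assuming an entropy-over-belief trigger (the class name, threshold, and action labels are illustrative, not the authors' actual code):

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete belief distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

class TurnManager:
    """Toy uncertainty-aware turn manager: issue a clarifying probe
    while the belief over the diagnostic label is still too uncertain,
    otherwise advance to the next interview item."""

    def __init__(self, probe_threshold=0.5):
        self.probe_threshold = probe_threshold  # nats; illustrative value

    def decide(self, belief):
        """belief: probabilities over diagnostic categories, summing to 1."""
        return "probe" if entropy(belief) > self.probe_threshold else "advance"
```

With this rule, a flat belief such as [0.5, 0.5] (entropy ~0.69 nats) triggers a probe, while a confident belief such as [0.95, 0.05] (entropy ~0.20 nats) lets the interview advance, which is one way fewer clarification prompts and shorter waits could trade off.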
Are We Generalizing from the Exception? An In-the-Wild Study on Group-Sensitive Conversation Design in Human-Agent Interactions
Müller, Ana, Jeschke, Sabina, Richert, Anja
This paper investigates the impact of a group-adaptive conversation design in two socially interactive agents (SIAs) through two real-world studies. Both SIAs - Furhat, a social robot, and MetaHuman, a virtual agent - were equipped with a conversational artificial intelligence (CAI) backend combining hybrid retrieval and generative models. The studies were carried out in an in-the-wild setting with a total of N = 188 participants who interacted with the SIAs - in dyads, triads or larger groups - at a German museum. Although the results did not reveal a significant effect of the group-sensitive conversation design on perceived satisfaction, the findings provide valuable insights into the challenges of adapting CAI for multi-party interactions and across different embodiments (robot vs. virtual agent), highlighting the need for multimodal strategies beyond linguistic pluralization. These insights contribute to the fields of Human-Agent Interaction (HAI), Human-Robot Interaction (HRI), and broader Human-Machine Interaction (HMI), informing future research on effective dialogue adaptation in group settings. Conversational artificial intelligence (CAI) is the core technology that enables socially interactive agents (SIAs) to understand and generate human language. These agents - including social robots, chatbots, and virtual agents - rely on multimodal signals (e.g., text, speech) to engage in naturalistic interactions with humans [1].
- North America > United States (0.05)
- Europe > Sweden (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Cologne (0.04)
- Europe > Germany > Bavaria > Middle Franconia > Nuremberg (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
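The study above contrasts "linguistic pluralization" with richer multimodal adaptation. A minimal sketch of what pluralization alone amounts to, assuming the agent knows the detected group size (the function name and wordings are illustrative, not the deployed system's prompts):

```python
def address_form(group_size):
    """Toy 'linguistic pluralization': adapt the agent's greeting to
    a single visitor vs. a dyad, triad, or larger group."""
    if group_size <= 1:
        return "Nice to meet you. What would you like to know?"
    return f"Nice to meet all {group_size} of you. What would you like to know?"
```

The sketch also illustrates the paper's point: switching singular for plural address is a purely verbal change, leaving gaze distribution, addressee selection, and other multimodal group cues untouched.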
Human-like Nonverbal Behavior with MetaHumans in Real-World Interaction Studies: An Architecture Using Generative Methods and Motion Capture
Chojnowski, Oliver, Eberhard, Alexander, Schiffmann, Michael, Müller, Ana, Richert, Anja
Socially interactive agents are gaining prominence in domains like healthcare, education, and service contexts, particularly virtual agents due to their inherent scalability. To facilitate authentic interactions, these systems require verbal and nonverbal communication through, e.g., facial expressions and gestures. While natural language processing technologies have rapidly advanced, incorporating human-like nonverbal behavior into real-world interaction contexts is crucial for enhancing the success of communication, yet this area remains underexplored. One barrier is creating autonomous systems with sophisticated conversational abilities that integrate human-like nonverbal behavior. This paper presents a distributed architecture using Epic Games' MetaHuman, combined with advanced conversational AI and camera-based user management, that supports methods like motion capture, handcrafted animation, and generative approaches for nonverbal behavior. We share insights into a system architecture designed to investigate nonverbal behavior in socially interactive agents, deployed in a three-week field study in the Deutsches Museum Bonn, showcasing its potential in realistic nonverbal behavior research.
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Cologne (0.05)
- North America > United States > New York > New York County > New York City (0.05)
- North America > United States > Texas > Brazos County > College Station (0.04)
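The architecture above supports three sources of nonverbal behavior: motion capture, handcrafted animation, and generative approaches. A minimal sketch of how a per-tick arbiter might choose among them, assuming a simple priority scheme (the class, priorities, and labels are illustrative, not the paper's actual selection policy):

```python
class NonverbalArbiter:
    """Toy arbiter for a MetaHuman-style agent: pick one nonverbal
    behavior source per update tick. Illustrative priority: live
    motion capture wins when available, then a handcrafted clip
    matching the current intent, then a generative fallback."""

    def __init__(self, handcrafted_clips):
        self.handcrafted_clips = set(handcrafted_clips)

    def select(self, mocap_live, intent):
        if mocap_live:
            return "motion_capture"
        if intent in self.handcrafted_clips:
            return "handcrafted"
        return "generative"
```

A priority scheme like this keeps the generative model as a fallback, so the agent is never without a behavior even when no capture stream or authored clip matches the current conversational intent.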
Hollywood Doesn't Have to Worry About A.I. Yet -- but Filmmakers Should Embrace It (Column)
Artificial intelligence has been a buzzword for futurists as long as computers have existed, but 2022 was the year the public started to dread its advancement. With the chatbot ChatGPT released to the public and generating complex answers to millions of prompts in seconds, many people in the business of storytelling have been worried about new competition. Hollywood screenwriters don't have to know how to save the cat if a computer can do it for them. This has been a year loaded with dramatic uncertainty for the industry, from the wild oscillations of the streaming market to the bombardment of doom-and-gloom prognoses for arthouse cinema. But these ephemeral dramas have nothing on the fear of encroaching A.I.
- North America > United States > California (0.14)
- North America > United States > New York (0.04)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
METAHUMAN coming to MENA
Founders of a new era in artificial intelligence announced their METAHUMAN platform during GITEX 2022 in Dubai. DRIPS.TV is a metahuman artificial intelligence platform that, as a first step, can replace human presenters at news channels, speaking all languages in any shape and look. Matti K. from Finland and Mohammed Ebrahim Al Fardan from the Kingdom of Bahrain have been working day and night during the pandemic to create the next unicorn. "It is METAHUMAN at last, speaking all languages on earth with almost perfect face and body impressions to deliver any broadcast, the future is now," said Mohammed Al Fardan, founder and technology expert.
- Asia > Middle East > Bahrain (0.74)
- Europe > Middle East > Malta > Mediterranean Sea (0.40)
- Europe > Middle East > Cyprus > Mediterranean Sea (0.40)
MetaHuman: Epic Games' latest project creates realistic digital humans for video games
A new tool from the studio behind "Fortnite" will make it easier for developers to create photorealistic digital humans for projects including video games. Epic Games unveiled its MetaHuman Creator, a cloud-streamed app allowing developers to create a digital human complete with hair and clothing in a matter of minutes. Epic also owns Unreal Engine, a platform used to create some of the most popular video games as well as in other industries including automotive, film and TV. "Bringing compelling real-time digital humans to life is incredibly challenging and time-consuming," reads a description on the website for MetaHuman Creator. "It can take months of research, costly scanning equipment, and an army of tech artists. What if we could make the process radically simpler, faster, and more scalable – without compromising on quality?"
- Information Technology > Artificial Intelligence > Games (0.96)
- Information Technology > Communications > Social Media (0.61)