
Collaborating Authors: human actor



Learning to Control an Android Robot Head for Facial Animation

Heisler, Marcel, Becker-Asano, Christian

arXiv.org Artificial Intelligence

The ability to display rich facial expressions is crucial for human-like robotic heads. While manually defining such expressions is intricate, approaches already exist to learn them automatically. In this work, one such approach is applied to evaluate and control a robot head different from the one used in the original study. To improve the mapping of facial expressions from human actors onto the robot head, we propose using 3D landmarks and their pairwise distances as input to the learning algorithm instead of the previously used facial action units. Participants in an online survey preferred mappings produced by our proposed approach in most cases, though further improvements are still needed.
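
The key technical change here is the input representation: instead of facial action units, the learning algorithm receives 3D landmarks and their pairwise distances. A minimal sketch of such a feature vector follows; the landmark count of 68 and the absence of any normalization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def landmark_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Turn (N, 3) facial landmarks into a vector of pairwise distances.

    Assumed feature construction: all N*(N-1)/2 Euclidean distances
    between distinct landmarks, taken in upper-triangular order.
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]  # (N, N, 3)
    dists = np.linalg.norm(diffs, axis=-1)                 # (N, N), symmetric
    rows, cols = np.triu_indices(len(landmarks), k=1)      # skip the diagonal
    return dists[rows, cols]

# 68 landmarks (a common detector output) -> 2278 distances per frame.
features = landmark_distance_features(np.random.rand(68, 3))
print(features.shape)  # (2278,)
```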


Act Natural! Projecting Autonomous System Trajectories Into Naturalistic Behavior Sets

Khan, Hamzah I., Thorpe, Adam J., Fridovich-Keil, David

arXiv.org Artificial Intelligence

Autonomous agents operating around human actors must consider how their behaviors might affect those humans, even when not directly interacting with them. To this end, it is often beneficial to be predictable and appear naturalistic. Existing methods address this problem with human intent modeling or imitation learning, but such approaches either fail to capture the full range of motivations behind human behavior or require large amounts of data. In contrast, we propose modeling naturalistic behavior as a set of convex hulls computed over a relatively small dataset of human behavior. Given this set, we design an optimization-based filter that projects arbitrary trajectories into it, making them more naturalistic for autonomous agents to execute while still satisfying dynamics constraints. We demonstrate our method on real-world human driving data from the inD intersection dataset (Bock et al., 2020).
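
To make the projection step concrete, here is a hedged sketch using cvxpy: a single state is projected onto the convex hull of observed human states by solving for convex-combination weights. The paper's actual filter operates on whole trajectories and enforces dynamics constraints, both of which are omitted here.

```python
import cvxpy as cp
import numpy as np

def project_into_hull(point: np.ndarray, data: np.ndarray) -> np.ndarray:
    """Project `point` onto the convex hull of the rows of `data`.

    A point lies in the hull iff it is a convex combination of the
    data points, so we solve a small QP over the weights w.
    """
    w = cp.Variable(data.shape[0])
    objective = cp.Minimize(cp.sum_squares(data.T @ w - point))
    constraints = [w >= 0, cp.sum(w) == 1]
    cp.Problem(objective, constraints).solve()
    return data.T @ w.value

# Toy 2D example: the hull of three observed states is a triangle.
observed = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(project_into_hull(np.array([1.0, 1.0]), observed))  # ~[0.5, 0.5]
```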


MITFAS: Mutual Information based Temporal Feature Alignment and Sampling for Aerial Video Action Recognition

Xian, Ruiqi, Wang, Xijun, Manocha, Dinesh

arXiv.org Artificial Intelligence

We present a novel approach for action recognition in UAV videos. Our formulation is designed to handle occlusion and viewpoint changes caused by the movement of a UAV. We use the concept of mutual information to compute and align the regions corresponding to human action or motion in the temporal domain. This enables our recognition model to learn from the key features associated with the motion. We also propose a novel frame sampling method that uses joint mutual information to acquire the most informative frame sequence in UAV videos. We have integrated our approach with X3D and evaluated the performance on multiple datasets. In practice, we achieve an 18.9% improvement in Top-1 accuracy over current state-of-the-art methods on UAV-Human (Li et al., 2021), a 7.3% improvement on Drone-Action (Perera et al., 2019), and a 7.16% improvement on NEC Drones (Choi et al., 2020).
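
As a rough illustration of the mutual-information machinery, the sketch below estimates MI between two equally sized grayscale regions from their joint histogram and greedily picks the next-frame candidate that shares the most information with the current action region. The histogram estimator and greedy matching are illustrative assumptions; the paper's alignment and joint-MI sampling are more involved.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based MI estimate for two same-sized grayscale regions."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal over a
    py = pxy.sum(axis=0, keepdims=True)  # marginal over b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def align(current_region: np.ndarray, candidates: list) -> np.ndarray:
    """Pick the next-frame candidate best aligned with the action region."""
    return max(candidates, key=lambda c: mutual_information(current_region, c))
```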


'Here is the news. You can't stop us': AI anchor Zae-In grants us an interview

The Guardian

Like most newsreaders, Zae-In wears a microphone pinned to her collar and clutches a stack of notes – but unlike most, her face is entirely fake. A "virtual human" designed by South Korean artificial intelligence company Pulse9, Zae-In spent five months this year reading live news bulletins on national broadcaster SBS. That, you might think, is it then. To adapt the words of another animated newscaster: "I, for one, welcome our new AI overlords." The world belongs to the artificially intelligent and the News at Ten will never be the same again.


AZTR: Aerial Video Action Recognition with Auto Zoom and Temporal Reasoning

Wang, Xijun, Xian, Ruiqi, Guan, Tianrui, de Melo, Celso M., Nogar, Stephen M., Bera, Aniket, Manocha, Dinesh

arXiv.org Artificial Intelligence

We propose a novel approach for aerial video action recognition. Our method is designed for videos captured using UAVs and can run on edge or mobile devices. We present a learning-based approach that uses customized auto zoom to automatically identify the human target and scale it appropriately. This makes it easier to extract the key features and reduces the computational overhead. We also present an efficient temporal reasoning algorithm that captures action information across the spatial and temporal domains within a controllable computational cost. Our approach has been implemented and evaluated both on a desktop with high-end GPUs and on the low-power Robotics RB5 platform for robots and drones. In practice, we achieve a 6.1-7.4% improvement in Top-1 accuracy over the state of the art on the RoCoG-v2 dataset, an 8.3-10.4% improvement on the UAV-Human dataset, and a 3.2% improvement on the Drone-Action dataset.
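
A simplified stand-in for the auto-zoom step is sketched below: crop around a human bounding box with some context margin, then rescale to the recognition model's input size. In the paper this module is learned; the detector-supplied box, margin, and output size here are illustrative assumptions.

```python
import cv2

def auto_zoom(frame, bbox, out_size=(224, 224), margin=0.2):
    """Crop around a detected human and rescale for the recognition model.

    frame: HxWx3 image; bbox: (x, y, w, h) from any person detector.
    The margin keeps some surrounding context around the actor.
    """
    x, y, w, h = bbox
    pad_w, pad_h = int(w * margin), int(h * margin)
    x0, y0 = max(x - pad_w, 0), max(y - pad_h, 0)
    x1 = min(x + w + pad_w, frame.shape[1])
    y1 = min(y + h + pad_h, frame.shape[0])
    return cv2.resize(frame[y0:y1, x0:x1], out_size)
```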


Vulnerabilities of Humans and Machines are Important

#artificialintelligence

How much we trust artificial intelligence (AI) systems influences whether we rely on them and how we use them. The factors that influence trust are attracting a lot of research interest, while vulnerability in human-machine interaction receives comparatively little attention. Yet vulnerability is at the heart of theories of trust between humans, even though it is largely absent from theories of trust in human-machine interaction. Between humans, trusting means accepting some risk of being disappointed by the other person in an interaction: people make themselves vulnerable to others.


Enhancing Operational Excellence with Augmented Business Process Management

#artificialintelligence

Recent years have brought a stream of exciting developments in the field of Business Process Management (BPM). We have seen a breathtaking uptake of business process automation technology, such as Robotic Process Automation (RPA). We have witnessed the rise of process mining, and promising evolutions in the areas of predictive process analytics and digital process twins. In the eyes of a business analyst, each of these technologies offers compelling opportunities to enhance operational excellence. However, if we look at these technologies in isolation, it is easy to miss the bigger picture: the wider space of opportunities they open up when used jointly rather than applied in isolated projects or silos.


In the age of deepfakes, could virtual actors put humans out of business?

The Guardian

When you're watching a modern blockbuster such as The Avengers, it's hard to escape the feeling that what you're seeing is almost entirely computer-generated imagery, from the effects to the sets to fantastical creatures. But if there's one thing you can rely on to be 100% real, it's the actors. We might have virtual pop stars like Hatsune Miku, but there has never been a world-famous virtual film star. Even that link with corporeal reality, though, is no longer absolute. You may have already seen examples of what's possible: Peter Cushing (or his image) appearing in Rogue One: A Star Wars Story more than 20 years after his death, or Tupac Shakur performing from beyond the grave at Coachella in 2012.


This wild, AI-generated film is the next step in "whole-movie puppetry"

#artificialintelligence

Two years ago, Ars Technica hosted the online premiere of a weird short film called Sunspring, which was mostly remarkable because its entire script was created by an AI. The film's human cast laughed at odd, computer-generated dialogue and stage direction before performing the results in particularly earnest fashion. That film's production duo, director Oscar Sharp and AI researcher Ross Goodwin, have returned with another AI-driven experiment that, on its face, looks decidedly worse. Blurry faces, computer-generated dialogue, and awkward scene changes fill out this year's Zone Out, a film created as an entry in the Sci-Fi-London 48-Hour Challenge, meaning that, just like last time, it had to be produced in 48 hours and adhere to certain specific prompts. That 48-hour limit is worth keeping in mind, because Sharp and Goodwin went one step further this time: they let their AI system, which they call Benjamin, handle the film's entire production pipeline.