Talespin's virtual human platform uses VR and AI to teach employees soft skills

#artificialintelligence

Training employees to perform specific tasks isn't difficult, but building their soft skills, such as their interactions with management, fellow employees, and customers, can be more challenging, particularly if there aren't people around to practice with. Virtual reality training company Talespin announced today that it is leveraging AI to tackle that challenge, using a new "virtual human platform" to create realistic simulations for employee training. Unlike traditional employee training, which might consist of passively watching a video or clicking through canned multiple-choice questions, Talespin's system has a trainee interact with a virtual human powered by AI, speech recognition, and natural language processing. Because the interactions use VR headsets and controllers, the hardware can track a trainee's gaze, body movement, and facial expressions during the session. Talespin's virtual character can converse realistically, guiding trainees through branching narratives with natural mannerisms and believable speech.
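
As a rough illustration of how a branching narrative like this can be structured, the Python sketch below models a conversation as a graph of dialogue nodes keyed by the intent that speech recognition and NLP extract from the trainee's reply. The node names, intents, and lines are invented for this example; this is not Talespin's actual data model or API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a branching-narrative node (not Talespin's real schema).
@dataclass
class DialogueNode:
    node_id: str
    prompt: str                                    # line spoken by the virtual human
    branches: dict = field(default_factory=dict)   # recognized intent -> next node id

def next_node(current: DialogueNode, recognized_intent: str, graph: dict) -> DialogueNode:
    """Advance the narrative using the intent produced by speech recognition / NLP."""
    next_id = current.branches.get(recognized_intent, current.node_id)  # stay put on unrecognized input
    return graph[next_id]

# Tiny example graph: practicing a difficult feedback conversation.
graph = {
    "intro": DialogueNode("intro", "Thanks for meeting with me. How do you feel the quarter went?",
                          {"defensive": "deescalate", "reflective": "coach"}),
    "deescalate": DialogueNode("deescalate", "I hear you. Let's look at the numbers together."),
    "coach": DialogueNode("coach", "Good self-awareness. Let's talk about next steps."),
}

node = graph["intro"]
node = next_node(node, "reflective", graph)
print(node.prompt)   # -> "Good self-awareness. Let's talk about next steps."
```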


How Facebook's brain-machine interface measures up

#artificialintelligence

Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs, Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D, described a prototype system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: the researchers managed to make out full spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, and a technique called electrocorticography (ECoG), the direct recording of electrical potentials associated with activity from the cerebral cortex, captured the rich neural signals. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
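
To make the decoding idea concrete, here is a minimal, hypothetical sketch that trains a linear classifier to separate "question" trials from "response" trials using per-electrode summary features. The data here is synthetic and the feature choice is an assumption; the actual study used far richer phonological speech models than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative sketch only: classify question vs. response trials from
# per-electrode high-gamma activity. Shapes, labels, and data are synthetic.
rng = np.random.default_rng(0)

n_trials, n_electrodes, n_timebins = 200, 64, 50
X = rng.normal(size=(n_trials, n_electrodes, n_timebins))   # stand-in for ECoG activity
y = rng.integers(0, 2, size=n_trials)                       # 0 = question heard, 1 = response spoken

# Collapse the time axis into one summary feature per electrode.
features = X.mean(axis=2)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, y, cv=5)
print(f"accuracy on synthetic data (chance level expected): {scores.mean():.2f}")
```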


A method to introduce emotion recognition in gaming

#artificialintelligence

Virtual Reality (VR) is opening up exciting new frontiers in the development of video games, paving the way for increasingly realistic, interactive and immersive gaming experiences. VR consoles allow gamers to feel like they are almost inside the game, overcoming limitations associated with display resolution and latency. A natural next step for VR would be emotion recognition, as this could enable the development of games that respond to a user's emotions in real time. With this in mind, a team of researchers at Yonsei University and Motion Device Inc. have recently proposed a deep-learning-based technique that could enable emotion recognition during VR gaming experiences. Their paper was presented at the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces.
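
As a rough sketch of what a deep-learning emotion recognizer can look like, the snippet below defines a small convolutional classifier over face images captured during play. The architecture, input size, and emotion labels are assumptions for illustration only, not the model proposed by the Yonsei and Motion Device team.

```python
import torch
import torch.nn as nn

# Hypothetical emotion classes; the paper's label set may differ.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

class EmotionCNN(nn.Module):
    """Minimal CNN that maps a 64x64 RGB face crop to emotion logits."""
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                 # -> (batch, 32, 16, 16)
        return self.classifier(x.flatten(1))

# Forward pass on a dummy batch of face crops captured during a VR session.
model = EmotionCNN()
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 5])
```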


Watch artificial intelligence create a 3D model of a person--from just a few seconds of video

#artificialintelligence

Transporting yourself into a video game, body and all, just got easier. Artificial intelligence has been used to create 3D models of people's bodies for virtual reality avatars, surveillance, fashion visualization, and movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from a single angle. The system has three stages.
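
A single-view body-reconstruction pipeline of this kind might be organized roughly as outlined below; the stage breakdown, function names, and array shapes are assumptions made for illustration, not the published algorithm.

```python
import numpy as np

# Hypothetical three-stage outline of single-view 3D body reconstruction.
def estimate_pose_per_frame(frames):
    """Stage 1 (assumed): estimate body pose parameters in each video frame."""
    return [np.zeros(72) for _ in frames]            # placeholder pose vectors

def fuse_canonical_shape(frames, poses):
    """Stage 2 (assumed): combine per-frame observations into one canonical body mesh."""
    return np.zeros((1000, 3))                       # placeholder vertex array

def add_texture(frames, mesh):
    """Stage 3 (assumed): project video pixels onto the mesh to recover appearance."""
    return np.zeros((1024, 1024, 3), dtype=np.uint8) # placeholder texture map

frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(30)]  # a few seconds of video
poses = estimate_pose_per_frame(frames)
mesh = fuse_canonical_shape(frames, poses)
texture = add_texture(frames, mesh)
print(mesh.shape, texture.shape)
```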