Bringing Augmented Reality to life with 'virtual humans' using Artificial Intelligence – the mission of Scanta (WRAL TechWire)

#artificialintelligence

Editor's note: This is the latest installment in an UpTech series of video interviews and accompanying transcripts about the emerging development and uses of Artificial Intelligence and Machine Learning. YourLocalStudio.com and WRAL TechWire are working together to publish this series. Alexander Ferguson is the founder and CEO of YourLocalStudio. Artificial intelligence and machine learning: these emerging technologies are changing the way we live, work, and do business for the better. But how is AI actually being applied in business today? In this episode of UpTech Report, I interview Chaitanya Hiremath, who also goes by Chad.


Watch artificial intelligence create a 3D model of a person from just a few seconds of video

#artificialintelligence

Transporting yourself into a video game, body and all, just got easier. Artificial intelligence has been used to create 3D models of people's bodies for virtual reality avatars, surveillance, fashion visualization, and movies, but it typically requires special camera equipment to capture depth or to view a person from multiple angles. A new algorithm creates 3D models from standard video footage shot from a single angle. The system works in three stages, sketched below.
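
The blurb stops short of describing the three stages, so the following is a minimal sketch of what such a monocular-video-to-avatar pipeline commonly looks like: per-frame pose and silhouette estimation, fused body-shape optimization, and texture mapping. The stage breakdown and every function name here are illustrative assumptions, not the researchers' published code.

```python
# A minimal sketch of a three-stage monocular-video-to-3D-avatar pipeline.
# The stage breakdown and all function names are illustrative assumptions,
# not the researchers' published code.

import numpy as np

def estimate_pose_and_silhouette(frame: np.ndarray):
    """Stage 1 (stub): estimate a 2D body pose and foreground mask per frame."""
    h, w = frame.shape[:2]
    pose = np.zeros((17, 2))                   # placeholder 2D joint locations
    silhouette = np.zeros((h, w), dtype=bool)  # placeholder foreground mask
    return pose, silhouette

def optimize_body_shape(poses, silhouettes):
    """Stage 2 (stub): fuse per-frame evidence into one canonical body shape."""
    return np.zeros(10)  # placeholder low-dimensional shape parameters

def compute_texture(frames, shape_params):
    """Stage 3 (stub): project frames onto the mesh to build a texture map."""
    return np.zeros((512, 512, 3), dtype=np.uint8)

def video_to_avatar(frames):
    poses, sils = zip(*(estimate_pose_and_silhouette(f) for f in frames))
    shape = optimize_body_shape(poses, sils)
    texture = compute_texture(frames, shape)
    return shape, texture

# A few seconds of 640x480 video at 30 fps.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(90)]
shape, texture = video_to_avatar(frames)
print(shape.shape, texture.shape)  # (10,) (512, 512, 3)
```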


A method to introduce emotion recognition in gaming

#artificialintelligence

Virtual Reality (VR) is opening up exciting new frontiers in the development of video games, paving the way for increasingly realistic, interactive, and immersive gaming experiences. VR consoles let gamers feel as if they are almost inside the game, overcoming limitations associated with display resolution and latency. A natural further step for VR would be emotion recognition, which could enable games that respond to a user's emotions in real time. With this in mind, a team of researchers at Yonsei University and Motion Device Inc. recently proposed a deep-learning-based technique that could enable emotion recognition during VR gaming experiences. Their paper was presented at the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces.
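
The blurb does not describe the paper's architecture, but a deep-learning emotion recognizer of this kind is typically a small convolutional classifier. The sketch below assumes 48x48 grayscale face crops (for example from a camera facing the player) and five emotion classes; the input modality, network shape, and class set are all illustrative assumptions, not the authors' design.

```python
# A minimal sketch of a deep-learning emotion classifier. The 48x48 grayscale
# input, network shape, and five emotion classes are illustrative assumptions;
# the paper's actual inputs and architecture may differ.

import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_emotions: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, 64), nn.ReLU(),
            nn.Linear(64, num_emotions),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN()
face_crop = torch.randn(1, 1, 48, 48)     # one (assumed) face crop of the player
probs = model(face_crop).softmax(dim=-1)  # per-emotion probabilities
print(probs)
```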


How Facebook's brain-machine interface measures up

#artificialintelligence

Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco, backed by Facebook Reality Labs (Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D), described a prototypical system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: the researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, which employed a technique called electrocorticography (ECoG), the direct recording of electrical potentials associated with activity from the cerebral cortex, to derive rich insights. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
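
To make the general approach concrete, here is a minimal sketch of one common ECoG decoding recipe: extract high-gamma (roughly 70-150 Hz) band power per electrode and train a classifier to label each utterance, for example as question versus response. The sampling rate, electrode count, band edges, and classifier are illustrative assumptions; the UCSF system's phonological speech models are far more sophisticated.

```python
# A minimal sketch of a common ECoG decoding recipe: high-gamma band power per
# electrode as features, plus a linear classifier for question vs. response.
# Sampling rate, electrode count, band edges, and classifier are assumptions;
# the UCSF system's phonological speech models are far more elaborate.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 1000  # sampling rate in Hz (assumed)

def high_gamma_power(trial: np.ndarray) -> np.ndarray:
    """Mean 70-150 Hz band power per channel for one utterance window."""
    b, a = butter(4, [70, 150], btype="band", fs=FS)
    filtered = filtfilt(b, a, trial, axis=-1)
    return (filtered ** 2).mean(axis=-1)

# Toy data: 40 utterances x 64 electrodes x 1 s of signal (random, shapes only).
rng = np.random.default_rng(0)
ecog = rng.standard_normal((40, 64, FS))
labels = rng.integers(0, 2, size=40)  # 0 = question, 1 = response

features = np.array([high_gamma_power(trial) for trial in ecog])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```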