Holograms on the Horizon?

Communications of the ACM

Researchers at the Massachusetts Institute of Technology (MIT) have used machine learning to reduce the processing power needed to render convincing holographic images, making it possible to generate them in near-real time on consumer-level computer hardware. Such a method could pave the way to portable virtual-reality systems that use holography instead of stereoscopic displays. Stereo imagery can present the illusion of three-dimensionality, but users often complain of dizziness and fatigue after long periods of use because of the mismatch between where the brain expects to focus and the flat focal plane of the two images (the vergence-accommodation conflict). Switching to holographic image generation overcomes this problem: it uses interference among the patterns of many light beams to construct visible shapes in free space, presenting the brain with images it can more readily accept as three-dimensional (3D) objects. "Holography in its extreme version produces a full optical reproduction of the image of the object. There should be no difference between the image of the object and the object itself," says Tim Wilkinson, a professor of electrical engineering at Jesus College at the U.K.'s University of Cambridge.
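To see why conventional hologram computation is so demanding, consider the classical Gerchberg-Saxton phase-retrieval algorithm, which iterates Fourier transforms between the hologram plane and the image plane. The NumPy sketch below is illustrative only and is not MIT's learned method, which the article describes as using machine learning in place of this kind of costly computation; it finds a phase-only hologram for a simple target pattern:

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50, seed=0):
    """Classical phase retrieval: find a phase-only hologram whose
    far-field diffraction pattern approximates target_intensity.
    Each iteration costs two FFTs, which is why classical real-time
    holography is demanding at display resolutions."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.uniform(0, 2 * np.pi, target_intensity.shape)
    for _ in range(iterations):
        far_field = np.fft.fft2(np.exp(1j * phase))                 # propagate to image plane
        far_field = target_amp * np.exp(1j * np.angle(far_field))   # impose target amplitude
        near_field = np.fft.ifft2(far_field)                        # propagate back
        phase = np.angle(near_field)                                # keep phase only
    return phase

# usage: a 64x64 target with a bright square in the middle
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
hologram_phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * hologram_phase))) ** 2
```

After a few dozen iterations, most of the diffracted energy lands inside the target square; a learned model aims to produce a comparable phase pattern in a single forward pass.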


Researchers Take Steps Towards Autonomous AI-Powered Exoskeleton Legs

#artificialintelligence

University of Waterloo researchers are using deep learning and computer vision to develop autonomous exoskeleton legs that help users walk, climb stairs, and avoid obstacles. The ExoNet project, described in an early-access paper in Frontiers in Robotics and AI, fits users with wearable cameras. AI software processes the cameras' video streams and is being trained to recognize surrounding features such as stairs and doorways, then determine the best movements to take. "Our control approach wouldn't necessarily require human thought," said Brokoslaw Laschowski, a Ph.D. candidate in systems design engineering and lead author on the ExoNet project. "Similar to autonomous cars that drive themselves, we're designing autonomous exoskeletons that walk for themselves."
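The article does not detail the control pipeline, but a system that selects locomotion modes from per-frame camera classifications would typically smooth those predictions so that a single misclassified frame cannot trigger a mode change. A minimal sketch, in which the class names and window size are illustrative assumptions rather than ExoNet's actual design:

```python
from collections import Counter, deque

# Hypothetical walking-environment classes; ExoNet's actual label set differs.
CLASSES = ("level-ground", "stairs-up", "stairs-down", "doorway")

class ModeSmoother:
    """Smooth noisy per-frame environment predictions with a sliding
    majority vote, so a single misclassified frame cannot switch the
    exoskeleton's locomotion mode."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)
        self.mode = None

    def update(self, frame_prediction):
        self.history.append(frame_prediction)
        label, count = Counter(self.history).most_common(1)[0]
        if count > len(self.history) // 2:   # require a strict majority
            self.mode = label
        return self.mode

smoother = ModeSmoother(window=5)
stream = ["level-ground", "level-ground", "stairs-up", "level-ground",
          "stairs-up", "stairs-up", "stairs-up"]
modes = [smoother.update(p) for p in stream]
# the mode only flips to "stairs-up" once stair frames dominate the window
```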


Global $384 Bn Smart Manufacturing Market 2020-2025 by Enabling Technology (Condition Monitoring, Artificial Intelligence, IIoT, Digital Twin, Industrial 3D Printing)

#artificialintelligence

From the report's table of contents:
Increased Integration of Different Solutions to Provide Improved Performance
5.2.3.3 Rapid Industrial Growth in Emerging Economies
5.2.4 Challenges
5.2.4.1 Threats Related to Cybersecurity
5.2.4.2 Complexity in Implementation of Smart Manufacturing Technology Systems
5.2.4.3 Lack of Awareness About Benefits of Adopting Information and Enabling Technologies
5.2.4.4 Lack of Skilled Workforce
5.3 Industrial Wearable Devices Trends in Smart Manufacturing
5.3.1 By Device
5.3.1.1


Nvidia unveiled a new AI engine that renders virtual worlds in real time – Fanatical Futurist by International Keynote Speaker Matthew Griffin

#artificialintelligence

Nvidia has announced a new Artificial Intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI Age," resulting in the first interactive AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high-definition virtual environments in real time that can be used to create Virtual Reality (VR) games and simulations. That matters because it removes much of the effort and cost of designing and building such environments from scratch. The researchers used what they call a Conditional Generative Neural Network as a starting point, then trained a neural network to render new 3D environments. The breakthrough will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos of the real world, dramatically lowering the cost and time of creating them. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, who led the research team. "Neural networks – specifically, generative models like these – are going to change the way graphics are created."
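As a rough illustration of what "conditional" generation means here: the generator's input is not pure noise but a semantic label map derived from real video, so the output image inherits the scene's structure. The toy sketch below uses untrained random weights and assumed class labels, nothing like the scale or architecture of Nvidia's actual trained model; it only shows that data flow:

```python
import numpy as np

NUM_CLASSES = 4   # e.g. road, building, car, sky (assumed labels)

def one_hot(label_map, num_classes=NUM_CLASSES):
    """Encode an integer semantic label map as per-pixel one-hot channels."""
    h, w = label_map.shape
    out = np.zeros((h, w, num_classes))
    out[np.arange(h)[:, None], np.arange(w)[None, :], label_map] = 1.0
    return out

def toy_generator(label_map, noise, weights):
    """Map (semantic labels + noise) -> RGB image with one per-pixel
    linear layer.  A real conditional generative model would use many
    convolutional layers trained adversarially on real-world video."""
    features = np.concatenate([one_hot(label_map), noise], axis=-1)
    return 1.0 / (1.0 + np.exp(-features @ weights))   # sigmoid into [0, 1]

rng = np.random.default_rng(0)
label_map = np.zeros((8, 8), dtype=int)
label_map[4:, :] = 1                      # bottom half is a different class
noise = rng.normal(size=(8, 8, 2))
weights = rng.normal(size=(NUM_CLASSES + 2, 3))
image = toy_generator(label_map, noise, weights)
```

Changing the label map changes the generated scene's layout, while the noise channels let the model vary appearance; that separation is the core idea behind conditioning a generator on real video.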


Domain Adaptation for sEMG-based Gesture Recognition with Recurrent Neural Networks

arXiv.org Machine Learning

Abstract: Surface Electromyography (sEMG) records muscles' electrical activity from a restricted area of the skin using electrodes. sEMG-based gesture recognition is extremely sensitive to inter-session and inter-subject variance. We propose a model and a deep-learning-based domain adaptation method that compensates for the domain shift to improve recognition accuracy. Experiments on sparse and High-Density (HD) sEMG datasets validate that our approach outperforms state-of-the-art methods. Traditionally, controlling a computer's graphical user interface, or the actions of a robot or drone, has been done with hand or arm gestures acting through a physical controller, such as a mouse for conventional 2D screens; a touch sensor on a touch screen can likewise be regarded as a physical controller. Wearable devices open the possibility of building a Human-Computer Interface (HCI) that offers a universal, natural, and easy-to-use way of interacting with machines.
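The paper's method is deep-learning-based, but the underlying goal, reducing the statistical shift between recording sessions or subjects, can be illustrated with a classical baseline: CORrelation ALignment (CORAL), which whitens source-domain features and re-colors them with target-domain statistics. A sketch on synthetic stand-in features, not the paper's RNN-based method:

```python
import numpy as np

def coral(source, target, eps=1e-8):
    """CORrelation ALignment: transform source features so their mean
    and covariance match the target domain's.  A classical baseline
    for what 'reducing domain shift' means for sEMG features recorded
    in different sessions or from different subjects."""
    def cov(x):
        return np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])

    def sqrtm(m, inverse=False):
        # matrix square root via eigendecomposition (covariances are SPD)
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        d = vals ** (-0.5 if inverse else 0.5)
        return (vecs * d) @ vecs.T

    cs, ct = cov(source), cov(target)
    whitened = (source - source.mean(0)) @ sqrtm(cs, inverse=True)
    return whitened @ sqrtm(ct) + target.mean(0)

rng = np.random.default_rng(1)
# synthetic "sEMG feature" matrices standing in for two recording sessions
source = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
target = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4)) + 2.0
adapted = coral(source, target)   # source features, re-aligned to target statistics
```

After alignment, a classifier trained on the adapted source features sees inputs whose first- and second-order statistics match the target session, which is the same shift the paper's learned method addresses end to end.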