A Book Review on AI 2041: Ten Visions for Our Future by Chen Qiufan and Kai-Fu Lee


We have always wondered how life will transform with the adoption of AI. Award-winning authors Chen Qiufan and Kai-Fu Lee explore that question in their book AI 2041: Ten Visions for Our Future, published on September 14, 2021. Its ten chapters analyze major disruptive technologies thriving in the tech-driven market, including deep learning, big data, NLP, AI education, AI healthcare, virtual reality, augmented reality, autonomous vehicles, and quantum computers. AI 2041 presents a ground-breaking blend of imaginative storytelling and scientific forecasting grounded in the technological trajectory of the 21st century, opening readers' minds to the applications of artificial intelligence across industries worldwide.

How Do You Build a Better Machine? You Can Use Artificial Intelligence


As industrial machines become more connected and flexible, the process of building and commissioning them is also getting smarter. Machines are now built using artificial intelligence, digital twins, and augmented reality. We caught up with Rahul Garg, VP of industrial machinery and mid-market program at Siemens Digital Industries Software, who explained the process of creating smart industrial machines using advanced technology. Design News: Is artificial intelligence becoming a major factor in building industrial machines?

Dynamic Difficulty Adjustment in Virtual Reality Exergames through Experience-driven Procedural Content Generation

Virtual Reality (VR) games that feature physical activities have been shown to increase players' motivation to do physical exercise. However, for such exercises to have a positive healthcare effect, they have to be repeated several times a week. To maintain player motivation over longer periods of time, games often employ Dynamic Difficulty Adjustment (DDA) to adapt the game's challenge according to the player's capabilities. For exercise games, this is mostly done by tuning specific in-game parameters like the speed of objects. In this work, we propose to use experience-driven Procedural Content Generation for DDA in VR exercise games by procedurally generating levels that match the player's current capabilities. Creating completely new levels, rather than only fine-tuning specific parameters, has the potential to decrease repetition over longer time periods and allows the cognitive and physical challenge of the exergame to be adapted simultaneously. As a proof of concept, we implement an initial prototype in which the player must traverse a maze that includes several exercise rooms, where the generation of the maze is realized by a neural network. Passing those exercise rooms requires the player to perform physical activities. To match the player's capabilities, we use Deep Reinforcement Learning to adjust the structure of the maze and to decide which exercise rooms to include in the maze. We evaluate our prototype in an exploratory user study utilizing both biodata and subjective questionnaires.
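The feedback loop behind DDA can be sketched in a few lines. The following is a hypothetical illustration, not the paper's method: it nudges a single scalar difficulty parameter toward a target success rate, the same idea the authors generalize from parameter tuning to generating entire levels. The function name, target, and step size are invented for the example.

```python
# Hypothetical DDA update (illustrative names and constants): push a
# difficulty parameter toward a target success rate so the challenge
# tracks the player's capabilities.

def adjust_difficulty(difficulty, success_rate, target=0.7, step=0.1,
                      lo=0.0, hi=1.0):
    """Raise difficulty when the player over-performs, lower it otherwise."""
    if success_rate > target:
        difficulty += step
    elif success_rate < target:
        difficulty -= step
    return max(lo, min(hi, difficulty))  # clamp to the valid range

# Simulated sessions: success rates measured after each level.
difficulty = 0.5
for rate in [0.9, 0.9, 0.5, 0.8]:
    difficulty = adjust_difficulty(difficulty, rate)
```

In the paper, the single parameter is replaced by the structure of a procedurally generated maze, and Deep Reinforcement Learning takes the place of this fixed update rule.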

iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks

Recent research in embodied AI has been boosted by the use of simulation environments to develop and train robot learning approaches. However, the use of simulation has skewed attention toward tasks that only require what robotics simulators can simulate: motion and physical contact. We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked. Additionally, given a logic state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite instances of tasks with minimal effort from the users. The sampling mechanism allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations. As a result, we can collect demonstrations from humans on these new types of tasks, and use them for imitation learning. We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI. iGibson 2.0 and its new dataset will be publicly available at
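The two directions the abstract describes, checking a logic predicate against a physical state and sampling a physical state that satisfies a predicate, can be illustrated with a toy sketch. This is not the iGibson 2.0 API; the thresholds, dictionary representation, and function names below are invented for illustration.

```python
# Toy illustration of predicate-logic state mapping (NOT the iGibson 2.0
# API; thresholds and names are invented).

import random

COOK_TEMP = 70.0    # assumed cooking threshold, arbitrary units
SOAK_LEVEL = 0.5    # assumed wetness threshold, as a fraction

def cooked(obj):
    """Forward direction: continuous physical state -> binary logic state."""
    return obj["temperature"] >= COOK_TEMP

def soaked(obj):
    return obj["wetness"] >= SOAK_LEVEL

def sample_cooked(rng=random):
    """Inverse direction: sample a physical state satisfying Cooked.
    This is what lets a simulator generate many task instances cheaply."""
    return {"temperature": rng.uniform(COOK_TEMP, 100.0), "wetness": 0.0}

apple = {"temperature": 85.0, "wetness": 0.1}
```

The inverse direction is the interesting one: instead of hand-placing objects, the environment can draw as many valid initial states as a task definition needs.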

AI and VR Transform Thoughts to Action with Wireless BCI


The aim of brain-computer interfaces (BCIs), also called brain-machine interfaces (BMIs), is to improve quality of life and restore capabilities to people with physical disabilities. Last week, researchers at the Georgia Institute of Technology and their global collaborators published a new study in Advanced Science demonstrating a wireless brain-computer interface that uses virtual reality (VR) and artificial intelligence (AI) deep learning to convert motor imagery into actions. The brain-computer interface industry is expected to reach USD 3.7 billion by 2027, with a compound annual growth rate of 15.5 percent during 2020-2027, according to Grandview Research. "Motor imagery offers an excellent opportunity as a stimulus-free paradigm for brain–machine interfaces," wrote Woon-Hong Yeo of the Georgia Institute of Technology, whose laboratory led the study in collaboration with the University of Kent in the United Kingdom and Yonsei University in the Republic of Korea. The combined AI, VR, and BCI system was assessed on four able-bodied human participants, according to a statement released on Tuesday by the Georgia Institute of Technology.

Apple considers using ML to make augmented reality more useful


A patent from Apple suggests the company is considering how machine learning can make augmented reality (AR) more useful. Most current AR applications are somewhat gimmicky, and barely a handful have achieved any form of mass adoption. Apple's decision to introduce LiDAR in its recent devices has given AR a boost, but it's clear that more needs to be done to make applications genuinely useful. A newly filed patent suggests that Apple is exploring how machine learning can be used to automatically (or "automagically," as the company would probably say) detect objects in AR. The first proposed use of the technology would be for Apple's own Measure app. Measure's previously dubious accuracy improved greatly after Apple introduced LiDAR, but most people probably still reach for an actual tape measure unless they are truly stuck without one.

The Future of Enterprise Billing


The connectivity benefits of 5G are expected to make businesses more competitive and give consumers access to more information faster than ever before. Connected cars, smart communities, industrial IoT, healthcare, immersive education--they all will rely on the unprecedented opportunities that 5G technology will create. The enterprise market opportunity is driving many telecoms operators' strategies for, and investments in, 5G. Companies are accelerating investment in core and emerging technologies such as cloud, the internet of things (IoT), robotic process automation, artificial intelligence, and machine learning. IoT, for example, improves connectivity and data sharing between devices and enables biometric-based transactions; blockchain enables use cases such as trade transactions, remittances, payments, and investments; and deep learning and artificial intelligence bring advanced algorithms for a high degree of personalization.

Altoida Raises $6.3M Series A to Predict Alzheimer's Disease Risk Using Artificial Intelligence, Machine Learning and Augmented Reality


Altoida Inc. today announced a $6.3 million round of venture capital financing to bring its FDA-cleared and CE Mark-approved medical device and brain health data platform to patients, physicians and researchers around the globe. Led by a team of esteemed neuroscientists, physicians and computer scientists, Altoida uses digital biomarkers to drive better clinical outcomes for brain disease. The Series A round was led by M Ventures, the corporate venture capital arm of the science and technology company Merck KGaA, Darmstadt, Germany, with participation from Grey Sky Venture Partners, VI Partners AG, Alpana Ventures, and FYRFLY Venture Partners. The new capital will be used to further expand Altoida's global presence with an immediate focus on commercialization activities in the US and EU markets. "Altoida is at the forefront of a new era to leverage Artificial Intelligence and Machine Learning to assess brain health," said Alexander Hoffmann, Principal, New Businesses at M Ventures.

Feeling of Presence Maximization: mmWave-Enabled Virtual Reality Meets Deep Reinforcement Learning

This paper investigates the problem of providing ultra-reliable and energy-efficient virtual reality (VR) experiences for wireless mobile users. To ensure reliable ultra-high-definition (UHD) video frame delivery to mobile users and enhance their immersive visual experiences, a coordinated multipoint (CoMP) transmission technique and millimeter wave (mmWave) communications are exploited. Owing to user movement and time-varying wireless channels, the wireless VR experience enhancement problem is formulated as a sequence-dependent and mixed-integer problem with a goal of maximizing users' feeling of presence (FoP) in the virtual world, subject to power consumption constraints on access points (APs) and users' head-mounted displays (HMDs). The problem, however, is difficult to solve directly due to the lack of users' accurate tracking information and its sequence-dependent and mixed-integer characteristics. To overcome this challenge, we develop a parallel echo state network (ESN) learning method to predict users' tracking information by training fresh and historical tracking samples separately collected by APs. With the learned results, we propose a deep reinforcement learning (DRL) based optimization algorithm to solve the formulated problem. In this algorithm, we implement deep neural networks (DNNs) as a scalable solution to produce integer decision variables and solve a continuous power control problem to criticize the integer decision variables. Finally, the performance of the proposed algorithm is compared with various benchmark algorithms, and the impact of different design parameters is also discussed. Simulation results demonstrate that the proposed algorithm is 4.14% more energy-efficient than the benchmark algorithms.
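The echo state network underlying the tracking predictor is simple to sketch: a fixed random recurrent reservoir plus a linear readout trained by ridge regression. The sketch below is a generic ESN on a one-step-ahead sine-prediction task, not the paper's parallel ESN or its tracking data; the reservoir size, spectral radius, washout length, and regularization are illustrative assumptions.

```python
# Minimal echo state network (ESN) sketch in NumPy: only the linear
# readout is trained; the recurrent reservoir stays fixed and random.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius -> 0.9

def run_reservoir(inputs):
    """Collect reservoir states for a 1-D input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave (toy stand-in for tracking data).
t = np.arange(400) * 0.1
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

washout = 50                                      # discard transient states
A, b = X[washout:], y[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)  # ridge

pred = A @ W_out
```

The "parallel" aspect in the paper comes from training separate readouts on fresh and historical samples at different APs, not from changing this basic reservoir mechanic.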

5 Ways Industrial AI Revolutionizes Manufacturing


Artificial Intelligence (AI) is most commonly applied in manufacturing to improve overall equipment effectiveness (OEE) and first-pass yield in production. Over time, manufacturers can use AI to increase uptime and improve quality and consistency, allowing for better forecasting. As with many components of digitization, AI implementation can seem overwhelming. Concerns about how to effectively use and manage the billions of data points generated by connected machines and modern computing power are common among manufacturers. Many are uncertain how to get started and often attribute their caution in AI adoption to cost, IT requirements, and/or fear of not being "Industry 4.0" ready.
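OEE is conventionally computed as the product of availability, performance, and quality. A minimal sketch, with made-up example figures:

```python
# OEE = Availability x Performance x Quality, each a fraction in [0, 1].
# The example figures below are invented for illustration.

def oee(availability, performance, quality):
    """Overall equipment effectiveness as the product of three factors."""
    return availability * performance * quality

# 90% uptime, running at 95% of ideal cycle rate, 98% first-pass yield.
score = oee(0.90, 0.95, 0.98)
```

A machine that looks fine on any one factor can still score poorly overall, which is why OEE is a common target for AI-driven improvement.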