2022-11
Counterfactual explanations for reinforcement learning: interview with Jasmina Gajcin
In this interview, Jasmina told us more about counterfactuals and some of the challenges of implementing them in reinforcement learning settings. RL enables intelligent agents to learn sequential tasks through a trial-and-error process. In the last decade, RL algorithms have been developed for healthcare, autonomous driving, games, and more (Li et al. 2017). However, RL agents often rely on neural networks, making their decision-making process difficult to understand and hindering their adoption in real-life tasks (Puiutta et al. 2020). In supervised learning, counterfactual explanations have been used to answer the question: given that a model produces output A for input features f1, …, fk, how can the features be changed so that the model produces a desired output B? (Verma et al. 2020) Counterfactual explanations give actionable advice to humans interacting with an AI system on how to change their features to achieve a desired output.
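As a toy illustration of that question, the sketch below brute-force searches for a counterfactual for a made-up classifier. The model, feature names, and step sizes here are invented for this example and are not taken from the interview or from any particular counterfactual method.

```python
import itertools

def predict(features):
    # Toy loan-approval model (invented purely for illustration).
    score = 2 * features["income"] + 3 * features["credit"] - 10
    return "approved" if score > 0 else "denied"

def counterfactual(features, desired, steps, max_changes=3):
    """Greedy brute-force search: apply up to max_changes small feature
    perturbations until the model outputs the desired label."""
    for n in range(1, max_changes + 1):
        for combo in itertools.product(steps.items(), repeat=n):
            candidate = dict(features)
            for name, delta in combo:
                candidate[name] = candidate[name] + delta
            if predict(candidate) == desired:
                return candidate
    return None  # no counterfactual found within the search budget

original = {"income": 2.0, "credit": 1.0}   # the model says "denied"
cf = counterfactual(original, "approved", steps={"income": 1.0, "credit": 1.0})
# cf is a nearby input for which the model says "approved"
```

The returned candidate is the "actionable advice": the smallest set of feature changes (within the search budget) that flips the model's output to the desired one. Real counterfactual methods replace this brute-force loop with optimisation and add constraints such as plausibility and sparsity.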
Can robots and AI help address the world's food security issues?
Ending global hunger has long been a critical goal for the global community. When the United Nations' Sustainable Development Goals were adopted in 2015, ending hunger, food insecurity and all forms of malnutrition formed SDG2. Though there has been some progress in the fight against hunger, ongoing conflicts, climate change, economic downturns and the COVID-19 pandemic have been major barriers to achieving SDG2. As of 2020, according to the UN, between 720 and 811 million people globally faced hunger, and current estimates suggest that 660 million people may still face hunger in 2030. Professor Salah Sukkarieh, a robotics engineer at the University of Sydney's Australian Centre for Field Robotics, will this week speak at the United Nations Food and Agriculture Organization's (FAO) Global Conference on Sustainable Plant Production in Rome (2-4 November).
Watch this robot dog scramble over tricky terrain just by using its camera
Unlike existing robots on the market, such as Boston Dynamics' Spot, which move around using internal maps, this robot uses cameras alone to guide its movements in the wild, says Ashish Kumar, a graduate student at UC Berkeley and one of the authors of a paper describing the work, due to be presented at the Conference on Robot Learning next month. Other attempts to use cues from cameras to guide robot movement have been limited to flat terrain, but the Berkeley team managed to get their robot to walk up stairs, climb over stones, and hop over gaps. The four-legged robot is first trained to move around different environments in a simulator, so it has a general idea of what walking in a park or up and down stairs is like. When it is deployed in the real world, visuals from a single camera on the front of the robot guide its movement. The robot learns to adjust its gait to navigate things like stairs and uneven ground using reinforcement learning, an AI technique that allows systems to improve through trial and error.
Using machine learning to help generalize automated chemistry
Researchers combined machine learning and a molecule-making machine to find the best conditions for automated complex chemistry. Pictured, from left: University of Illinois chemistry professor Martin D. Burke, materials science and engineering professor Charles M. Schroeder, graduate student Nicholas Angello and postdoctoral researcher Vandana Rathore. Pictured on the screen behind them are international collaborators, led by professors Bartosz A. Grzybowski and Alán Aspuru-Guzik. Artificial intelligence, "building-block" chemistry and a molecule-making machine were combined to find the best general reaction conditions for synthesizing chemicals important to biomedical and materials research – a finding that could speed innovation and drug discovery as well as make complex chemistry automated and accessible. With the machine-generated optimized conditions, researchers at the University of Illinois Urbana-Champaign and collaborators in Poland and Canada doubled the average yield of a special, hard-to-optimize type of reaction linking carbon atoms together in pharmaceutically important molecules.
Researchers make breakthrough in reconstruction for cryogenic electron tomography
In a study published recently in Nature Communications, a team led by Prof. BI Guoqiang from the University of Science and Technology of China (USTC) and the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), together with collaborators from the United States, developed a software package named IsoNet for isotropic reconstruction in cryogenic electron tomography (cryoET). Their work effectively addresses two intrinsic problems in cryoET: the "missing wedge" and the low signal-to-noise ratio. Anisotropic resolution caused by the missing-wedge problem has long been a challenge when using cryoET to visualize cellular structures. To solve this, the team built IsoNet around an iterative, self-supervised deep neural network. Using rotated cryoET tomographic 3D reconstructions as the training set, the algorithm performs missing-wedge correction on the cryoET data. A denoising step is also built into IsoNet, allowing the network to recover missing information and denoise the tomographic 3D data at the same time.
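To give some intuition for the "missing wedge": because a cryoET sample can only be tilted through a limited angular range (commonly around ±60°), a wedge-shaped region of Fourier space is never measured, which is what causes the anisotropic resolution. The NumPy sketch below builds such a mask in 2D purely for intuition; it is not part of IsoNet, whose actual pipeline is a 3D deep-learning reconstruction.

```python
import numpy as np

def missing_wedge_mask(size, max_tilt_deg=60.0):
    """2D Fourier-space mask: True where a tilt series with tilts up to
    ±max_tilt_deg actually samples data, False inside the missing wedge."""
    freqs = np.fft.fftfreq(size)
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    # Angle of each frequency component from the untilted axis, in degrees.
    angle = np.degrees(np.arctan2(np.abs(fy), np.abs(fx)))
    return angle <= max_tilt_deg

mask = missing_wedge_mask(64)
coverage = mask.mean()   # fraction of Fourier space actually observed
```

With a ±60° tilt range, roughly a third of the directions in Fourier space go unobserved; IsoNet's network learns to fill in that unmeasured region using the structure present in the rest of the tomogram.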
Learning to efficiently plan robust frictional multi-object grasps: interview with Wisdom Agboh
When skilled waiters clear tables, they grasp multiple utensils and dishes in a single motion. Robots in warehouses, on the other hand, are less efficient, picking only a single object at a time. This research leverages neural networks and fundamental robot-grasping theorems to build an efficient robot system that grasps multiple objects at once. Amidst increasing demand and labour shortages, fast and efficient robot picking systems in warehouses have become indispensable for delivering online orders quickly. This research studies the fundamentals of multi-object robot grasping, a task that is easy for humans yet extremely challenging for robots.
Autonomous Vehicles Seek Traction in Austin
The road to driverless cars has been a long and winding one. Look no further than the Oct. 26 news that Argo AI, the autonomous vehicle company backed by Ford and Volkswagen, would be shutting down. Only a few weeks earlier, Argo AI had launched a partnership with Lyft to offer supervised autonomous rides around Austin. It had previously announced a partnership with Walmart to carry out deliveries. The company, which began operations in Austin in 2019, had about 20 vehicles as of October that could be seen around town.
Lyft Aspired to Kill Car Ownership. Now It Aims to Profit From It
Lyft customers know it as the bright-pink app to tap when they need a car ride or to rent a bike or scooter. Today the company announced it wants to be the place to go to care for your own car. Lyft's app will offer a way to find and reserve parking in 16 cities, summon roadside assistance, and schedule vehicle maintenance. Adding those new services is a small step for an app but part of a much bigger shift in ride hailing. As Lyft and its larger competitor Uber search for a way to finally generate a profit, some visions they once espoused for the future have been tweaked, if not left on the side of the road.
How natural language processing helps promote inclusivity in online communities
To create healthy online communities, companies need better strategies to weed out harmful posts. In this VB On-Demand event, AI/ML experts from Cohere and Google Cloud share insights into the new tools changing how moderation is done. Game players experience a staggering amount of online abuse. A recent study found that five out of six adults (aged 18-45), or over 80 million gamers, experienced harassment in online multiplayer games.
Researchers develop a meta-reinforcement learning algorithm for traffic signal control
Traffic signal control affects the daily life of people living in urban areas. Existing systems rely on theory- or rule-based controllers that alter the traffic lights based on traffic conditions. The objective is to reduce vehicle delay during unsaturated traffic conditions and to maximize vehicle throughput during congestion. However, existing traffic signal controllers cannot fully meet these objectives, and a human controller can only manage a few intersections. In view of this, recent advances in artificial intelligence have focused on enabling alternative approaches to traffic signal control. Current research on this front has explored reinforcement learning (RL) algorithms as a possible approach.
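To make the RL framing concrete, the sketch below runs tabular Q-learning on a single intersection with an invented queue model. The state, dynamics, and reward here are all illustrative toys; the meta-RL work described above is considerably more sophisticated and operates on real traffic features.

```python
import random

random.seed(0)
ACTIONS = (0, 1)            # 0 = keep current green phase, 1 = switch phase
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = {}                      # Q-table: (state, action) -> estimated return

def step(state, action):
    """Toy environment: state = (phase, north-south queue, east-west queue)."""
    phase, ns, ew = state
    if action == 1:
        phase = 1 - phase
    # The green direction discharges up to 3 cars; both directions gain arrivals.
    ns = max(0, ns - (3 if phase == 0 else 0)) + random.randint(0, 2)
    ew = max(0, ew - (3 if phase == 1 else 0)) + random.randint(0, 2)
    ns, ew = min(ns, 9), min(ew, 9)      # cap queues to keep the table small
    return (phase, ns, ew), -(ns + ew)   # reward: negative total queue length

state = (0, 5, 5)
for _ in range(20000):
    # Epsilon-greedy action selection over the current Q estimates.
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
    nxt, reward = step(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
    q = Q.get((state, action), 0.0)
    Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
    state = nxt
```

Because the reward is the negative total queue length, maximizing return means minimizing delay, matching the objectives stated above. Practical systems replace the Q-table with a neural network and must also coordinate many intersections, which is part of what makes the problem hard.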