ScienceDaily > Robotics Research
New model reduces bias and enhances trust in AI decision-making and knowledge organization
Traditional machine learning models often yield biased results, favouring groups with large populations or being influenced by unknown factors. These biases take extensive effort to identify when instances contain patterns and sub-patterns drawn from different classes or primary sources. The medical field is one area where biased machine learning results have severe implications. Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes.
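The article does not detail the new model's mechanics, but the imbalance problem it targets — majority groups dominating the training signal — can be illustrated with a standard mitigation: inverse-frequency class weighting, so that rare patient groups contribute as much to the loss as common ones. This is a minimal sketch of that general technique, not the researchers' method; the labels are hypothetical.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so that minority
    classes contribute as much to a weighted loss as majority classes."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    # weight[c] = total / (n_classes * count[c]); common classes get < 1
    return {c: total / (n_classes * n) for c, n in counts.items()}

# 90 common cases vs. 10 rare ones: the rare class is upweighted 9x
weights = inverse_frequency_weights(["common"] * 90 + ["rare"] * 10)
```

With these weights, a rare class seen 10 times in 100 samples receives weight 5.0 while the common class receives about 0.56, equalizing their total influence on training.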
Advanced universal control system may revolutionize lower limb exoskeleton control and optimize user experience
While advances in wearable robotics have helped restore mobility for people with lower limb impairments, current control methods for exoskeletons are limited in their ability to provide natural and intuitive movements for users. This can compromise balance and contribute to user fatigue and discomfort. Few studies have focused on the development of robust controllers that can optimize the user's experience in terms of safety and independence. Existing exoskeletons for lower limb rehabilitation employ a variety of technologies to help the user maintain balance, including special crutches and sensors, according to co-author Ghaith Androwis, PhD, senior research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation and director of the Center's Rehabilitation Robotics and Research Laboratory. Exoskeletons that operate without such aids allow more independent walking, but at the cost of added weight and slower walking speeds.
Robot 'chef' learns to recreate recipes from watching food videos
The researchers, from the University of Cambridge, programmed their robotic chef with a 'cookbook' of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it. In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their results, reported in the journal IEEE Access, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.
Effective as a collective: Researchers investigate the swarming behavior of microrobots
Researchers are looking for new ways to perform tasks on the micro- and nanoscale that are otherwise difficult to realize, particularly as the miniaturization of devices and components is beginning to reach physical limits. One new option being considered is the use of collectives of robotic units in place of a single robot to complete a task. "The task-solving capabilities of one microrobot are limited due to its small size," said Professor Thomas Speck, who headed the study at Mainz University. "But a collective of such robots working together may well be able to carry out complex assignments with considerable success." Statistical physics becomes relevant here because it analyzes models describing how such collective behavior can emerge from simple interactions, comparable to the flocking behavior of birds.
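The flocking analogy can be made concrete with the Vicsek model, a classic statistical-physics model of collective motion (an illustrative choice here — the article does not name the specific model used in the study): each particle moves at constant speed and adopts the average heading of its neighbours, plus angular noise.

```python
import numpy as np

def vicsek_step(pos, theta, box=10.0, r=1.0, eta=0.1, v=0.03, rng=None):
    """One Vicsek-model update: each particle aligns with the circular
    mean heading of all neighbours within radius r (including itself),
    perturbed by uniform angular noise of strength eta, then moves at
    constant speed v in a periodic box."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        # minimum-image displacement vectors under periodic boundaries
        d = pos - pos[i]
        d -= box * np.round(d / box)
        nbr = (d ** 2).sum(axis=1) <= r ** 2
        # circular mean of neighbour headings
        new_theta[i] = np.arctan2(np.sin(theta[nbr]).mean(),
                                  np.cos(theta[nbr]).mean())
    new_theta += eta * rng.uniform(-np.pi, np.pi, n)
    vel = v * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % box, new_theta
```

At low noise the headings converge and the collective moves as one ordered flock; at high noise the order breaks down — the kind of emergent transition statistical physics studies in such models.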
Robots and Rights: Confucianism Offers Alternative
The analysis, by a researcher at Carnegie Mellon University (CMU), appears in Communications of the ACM, published by the Association for Computing Machinery. "People are worried about the risks of granting rights to robots," notes Tae Wan Kim, Associate Professor of Business Ethics at CMU's Tepper School of Business, who conducted the analysis. "Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers -- not as rights bearers -- could work better." Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self.
Robot: I'm sorry. Human: I don't care anymore!
Similar to human co-workers, robots can make mistakes that violate a human's trust in them. When mistakes happen, humans often see robots as less trustworthy, which ultimately decreases their trust in them. The study examines four strategies that might repair trust and mitigate the negative impacts of these violations. These trust repair strategies were apologies, denials, explanations, and promises of trustworthiness. An experiment was conducted in which 240 participants worked with a robot co-worker to accomplish a task, which sometimes involved the robot making mistakes.
Using everyday WiFi to help robots see and navigate better indoors
The technology consists of sensors that use WiFi signals to help the robot map where it's going. Most systems rely on optical light sensors such as cameras and LiDARs. In this case, the so-called "WiFi sensors" use radio frequency signals rather than light or visual cues to see, so they can work in conditions where cameras and LiDARs struggle -- in low light, changing light, and repetitive environments such as long corridors and warehouses. And by using WiFi, the technology could offer an economical alternative to expensive and power hungry LiDARs, the researchers noted. A team of researchers from the Wireless Communication Sensing and Networking Group, led by UC San Diego electrical and computer engineering professor Dinesh Bharadia, will present their work at the 2022 International Conference on Robotics and Automation (ICRA), which will take place from May 23 to 27 in Philadelphia.
Autonomous robot plays with NanoLEGO
Rapid prototyping, the fast and cost-effective production of prototypes or models -- better known as 3D printing -- has long since established itself as an important tool for industry. "If this concept could be transferred to the nanoscale to allow individual molecules to be specifically put together or separated again just like LEGO bricks, the possibilities would be almost endless, given that there are around 10^60 conceivable types of molecule," explains Dr. Christian Wagner, head of the ERC working group on molecular manipulation at Forschungszentrum Jülich. There is one problem, however. Although the scanning tunnelling microscope is a useful tool for shifting individual molecules back and forth, a special custom "recipe" is always required in order to guide the tip of the microscope to arrange molecules spatially in a targeted manner. This recipe can neither be calculated nor deduced by intuition -- the mechanics on the nanoscale are simply too variable and complex.
Soldiers could teach future robots how to outperform humans
At the U.S. Army Combat Capabilities Development Command's Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive. The team tested its approach -- called adaptive planner parameter learning from demonstration, or APPLD -- on one of the Army's experimental autonomous ground vehicles. "Using approaches like APPLD, current Soldiers in existing training facilities will be able to contribute to improvements in autonomous systems simply by operating their vehicles as normal," said Army researcher Dr. Garrett Warnell. "Techniques like these will be an important contribution to the Army's plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments." Rather than replacing a classical system altogether, APPLD learns how to tune the existing system to behave more like the human demonstration.
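The core idea — tuning an existing planner's parameters so its behavior matches a human demonstration, rather than replacing the planner — can be sketched as a simple search over candidate parameter settings. This is a deliberately simplified illustration of that idea, not the APPLD algorithm itself (APPLD learns context-dependent parameters from segmented demonstrations); `planner`, `demo`, and the candidate list here are all hypothetical.

```python
def tune_planner_params(planner, demo, candidates):
    """Pick, from a candidate set, the parameter setting whose planner
    output trajectory is closest to the human demonstration, measured
    by sum of squared errors. The planner itself is left unchanged."""
    def mismatch(params):
        return sum((a - b) ** 2 for a, b in zip(planner(params), demo))
    return min(candidates, key=mismatch)

# toy example: a "planner" whose output scales with a single speed parameter,
# and a demonstration recorded at speed 1.0
demo = [0.0, 1.0, 2.0, 3.0]
planner = lambda speed: [speed * t for t in range(4)]
best = tune_planner_params(planner, demo, [0.5, 1.0, 1.5, 2.0])
# best == 1.0: the tuned parameter reproduces the demonstrated behavior
```

The design point the article emphasizes survives even in this toy version: the classical navigation system stays in place, and demonstration data only adjusts its knobs.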