
Collaborating Authors

2022-03


Mercedes to accept legal responsibility for accidents involving self-driving cars

#artificialintelligence

Mercedes has announced that it will take legal responsibility for any crashes that occur while its self-driving systems are engaged. The company is currently in the process of deploying "Drive Pilot" technology for its new S-Class and EQS saloon models, which is "Level 3" autonomy on a six-tier system devised by the Society of Automotive Engineers (SAE), ranging from Level 0 (no automated driver assistance) to Level 5 (the car drives itself everywhere without any input from the vehicle occupants). Level 3 autonomy means that drivers may take their hands off the wheel and undertake other tasks, such as reading a book, while the car assumes full control of all driving functions. However, this applies only in specific conditions, such as low-speed traffic on motorways, and the person in the driver's seat must be able to retake control within a few seconds of an alert from the car. This is a big leap from Level 2 autonomy, which requires hands-on-wheel supervision from the driver at all times, and which is currently commonplace on new cars in the form of adaptive cruise control and automated lane-keeping. Some cars from the likes of Audi, Mercedes, BMW, Genesis and Tesla have such advanced systems that they are considered somewhere between Levels 2 and 3 -- dubbed by experts as "Level 2+".
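The six-tier taxonomy described above can be sketched as a simple lookup. This is purely illustrative (the level descriptions are paraphrased from the summary, not official SAE wording), but it captures the key threshold the article turns on: hands-off driving begins at Level 3.

```python
# Illustrative sketch of the SAE driving-automation levels (paraphrased).
SAE_LEVELS = {
    0: "No automation: driver performs all driving tasks",
    1: "Driver assistance: one automated function at a time",
    2: "Partial automation: hands-on supervision required at all times",
    3: "Conditional automation: hands-off in limited conditions; "
       "driver must retake control within seconds of an alert",
    4: "High automation: no driver input needed within a defined domain",
    5: "Full automation: the car drives itself everywhere",
}

def hands_off_allowed(level: int) -> bool:
    """Drivers may take their hands off the wheel from Level 3 upward."""
    if level not in SAE_LEVELS:
        raise ValueError(f"SAE levels run 0-5, got {level}")
    return level >= 3
```

Under this scheme, Mercedes's Drive Pilot (Level 3) clears the hands-off threshold, while today's common adaptive cruise control and lane-keeping (Level 2) does not.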


You'll be injecting robots into your bloodstream to fight disease soon

#artificialintelligence

What if there was a magical robot that could cure any disease? Everyone knows there's no one machine that could do that. But maybe a swarm made up of tens of thousands of tiny autonomous micro-bots could? That's the premise laid out by proponents of nanobot medical technology. In science fiction, the big idea usually involves creating tiny metal robots via some sort of magic-adjacent miniaturization technology.


MIT research suggests AI can learn to identify images using synthetic data

#artificialintelligence

The MIT researchers said their generative model requires less memory to store than the datasets it replaces, which can cost millions of dollars to create. MIT researchers have found a way to classify images using synthetic data, which they claim can rival models trained on real data. In the study, the team created a special type of machine learning model to generate extremely realistic synthetic data, which can then train another model for vision-related tasks. The researchers said that currently, massive amounts of data are required to train a machine to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, the datasets required to train the model can cost millions of dollars to generate.
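The general idea — train a classifier on samples drawn from a generative model instead of on a collected dataset — can be shown in miniature. This is a toy sketch, not the MIT team's method: the "generative model" here is a pair of hard-coded class-conditional Gaussians, and the classifier is nearest-centroid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generative model": class-conditional Gaussians whose parameters
# would, in the real setting, be learned from data rather than hard-coded.
means = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 3.0])}

def generate(label, n):
    """Draw n synthetic samples for the given class."""
    return means[label] + rng.normal(size=(n, 2))

# Train a nearest-centroid classifier purely on synthetic samples.
train = {c: generate(c, 500) for c in means}
centroids = {c: x.mean(axis=0) for c, x in train.items()}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Evaluate on separately drawn held-out data standing in for "real" data.
real_x = np.vstack([generate(0, 100), generate(1, 100)])
real_y = np.array([0] * 100 + [1] * 100)
acc = np.mean([predict(x) == y for x, y in zip(real_x, real_y)])
print(f"accuracy on held-out data: {acc:.2f}")
```

The point mirrors the article's claim: no stored training dataset is needed, only the (much smaller) generative model that produces training samples on demand.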


Miniature medical robots step out from sci-fi

Nature

Cancer drugs usually take a scattergun approach. Chemotherapies inevitably hit healthy bystander cells while blasting tumours, sparking a slew of side effects. It is also a big ask for an anticancer drug to find and destroy an entire tumour -- some are difficult to reach, or hard to penetrate once located. A long-dreamed-of alternative is to inject a battalion of tiny robots into a person with cancer. These miniature machines could navigate directly to a tumour and smartly deploy a therapeutic payload right where it is needed. "It is very difficult for drugs to penetrate through biological barriers, such as the blood–brain barrier or mucus of the gut, but a microrobot can do that," says Wei Gao, a medical engineer at the California Institute of Technology in Pasadena.


Robot dog called in to help manage Pompeii

The Guardian

A four-legged robot called Spot has been deployed to wander around the ruins of ancient Pompeii, identifying structural and safety issues while delving underground to inspect tunnels dug by relic thieves. The dog-like robot is the latest in a series of technologies used as part of a broader project to better manage the archaeological park since 2013, when Unesco threatened to add Pompeii to a list of world heritage sites in peril unless Italian authorities improved its preservation. Spot, made by the US-based Boston Dynamics, is capable of inspecting even the smallest of spaces while "gathering and recording data useful for the study and planning of interventions", park authorities said. The aim, they added, is to "improve both the quality of monitoring of the existing areas, and to further our knowledge of the state of progress of the works in areas undergoing recovery or restoration, and thereby to manage the safety of the site, as well as that of workers." Until Spot came along, no technology of its kind had been developed for archaeological sites, according to Gabriel Zuchtriegel, the director of Pompeii archaeological park. Park authorities have also experimented with a flying laser scanner capable of conducting 3D scans across the 66-hectare (163-acre) site.


Innovative AI technology aids personalized care for diabetes patients needing complex drug treatment

#artificialintelligence

For this smaller group of patients, physicians may have limited clinical decision-making experience or evidence-based guidance for choosing drug combinations. The solution is to expand the number of patients to support development of general principles to guide decision-making. Combining patient data from multiple healthcare institutions, however, requires deep expertise in artificial intelligence (AI) and wide-ranging experience in developing machine learning models using sensitive and complex healthcare data. Researchers from Hitachi, University of Utah Health and the Regenstrief Institute partnered to develop and test a new AI method that analyzed electronic health record data across Utah and Indiana and learned generalizable treatment patterns of type 2 diabetes patients with similar characteristics. Those patterns can now be used to help determine an optimal drug regimen for a specific patient.


This Cheetah Robot Taught Itself How to Sprint in a Weird Way

WIRED

It takes years of practice to crawl and then walk well, during which time mothers don't have to worry about their children legging it out of the county. Roboticists don't have that kind of time to spare, however, so they're developing ways for machines to learn to move through trial and error--just like babies, only way, way faster. But MIT scientists announced last week that they got this research platform, a four-legged machine known as Mini Cheetah, to hit its fastest speed ever--nearly 13 feet per second, or 9 miles per hour--not by meticulously hand-coding its movements line by line, but by encouraging digital versions of the machine to experiment with running in a simulated world. What the system landed on is … unconventional. But the researchers were able to port what the virtual robot learned into this physical machine that could then bolt across all kinds of terrain without falling on its, um, face. This technique is known as reinforcement learning.
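The trial-and-error loop the article describes is reinforcement learning in its textbook form. The sketch below is not MIT's system (which trains a locomotion policy in a physics simulator); it is the same loop at toy scale: tabular Q-learning on a 1-D track, where an agent learns by experimentation that stepping right reaches the goal.

```python
import random

random.seed(0)

# Toy "simulated world": states 0..5 on a line; the agent earns a reward
# only by reaching the rightmost state. Actions move it left or right.
N, ACTIONS = 6, (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0          # reward only at the goal
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

Scale the state up from six grid cells to joint angles and contact forces, and the learned behavior up from "step right" to a full gait, and you have the shape of the Mini Cheetah experiment — including the possibility, noted in the article, that the policy the agent lands on looks unconventional.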


The Promise of AI in Gene and Cell Therapy Operations

#artificialintelligence

There is no longer any doubt that artificial intelligence (AI) is advancing biological discovery and biomanufacturing operations. In biological discovery, AI systems such as AlphaFold and the Atomic Rotationally Equivariant Scorer are celebrated for their uncanny ability to predict tertiary structures for proteins and RNA molecules. In biomanufacturing, AI systems usually enjoy less fanfare. Yet they can provide valuable functions such as pattern recognition, real-time assessment of batch quality, multivariable control for continuous manufacturing, prediction/optimization of critical process parameters, and anomaly detection. Such functions are critical to the success of gene and cell therapy operations.
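Of the biomanufacturing functions listed above, anomaly detection is the simplest to illustrate. The sketch below uses entirely hypothetical bioreactor readings and a basic z-score rule — far simpler than any production system, but it shows the core idea of flagging in-process measurements that stray from the batch baseline.

```python
import statistics

# Hypothetical in-process sensor readings from a bioreactor batch (e.g. pH).
readings = [7.01, 7.03, 6.99, 7.02, 7.00, 7.04, 6.55, 7.01]

mean = statistics.fmean(readings)
sd = statistics.stdev(readings)

# Flag any reading more than two standard deviations from the batch mean.
anomalies = [(i, x) for i, x in enumerate(readings)
             if abs(x - mean) > 2 * sd]
print(anomalies)
```

In a real gene or cell therapy operation the inputs would be multivariate time series and the models far richer, but the output is the same kind of signal: a pointer to the moment in the batch where quality may have drifted.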


Watch a robot peel a banana without crushing it into oblivion

New Scientist

A robot trained by machine learning that imitates a human demonstrator can successfully peel a banana without smashing it to smithereens. Handling soft fruit is a challenge for robots, which often lack the dexterity and nuanced touch to process items without destroying them. The uneven shape of fruit – which can vary significantly even within the same type of fruit – can also flummox the computer-vision algorithms that often act as the brains of such robots. Heecheol Kim at the University of Tokyo and his colleagues have developed a machine-learning system that powers a robot with two arms, each ending in a hand that grasps between two "fingers". First, a human operating the robot peeled hundreds of bananas, creating 811 minutes of demonstration data to train the robot to do it by itself.
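Learning from demonstration can be sketched in a few lines. This is not the Tokyo team's model — their system learns from raw sensor data — but the same recipe at toy scale: record (state, action) pairs from a demonstrator, then fit a policy that reproduces the demonstrator's actions, here with a linear least-squares fit on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy demonstration data: the (hidden) demonstrator's action is a fixed
# linear function of the 2-D state, plus a little noise. The sample count
# is arbitrary; real data would come from teleoperated demonstrations.
true_W = np.array([[1.0, -0.5],
                   [0.3,  2.0]])
states = rng.normal(size=(800, 2))
actions = states @ true_W + 0.01 * rng.normal(size=(800, 2))

# "Behavioral cloning": fit a linear policy to the recorded pairs.
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy now maps unseen states to demonstrator-like actions.
err = np.abs(W_hat - true_W).max()
print(f"max weight error: {err:.3f}")
```

The banana task swaps the linear map for a deep network and the toy states for camera images and joint positions, but the supervision signal is the same: do what the human did.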


Trustworthy AI: How to ensure trust and ethics in AI

#artificialintelligence

A pragmatic and direct approach to ethics and trust in artificial intelligence (AI) -- who would not want that? This is how Beena Ammanath describes her new book, Trustworthy AI. Ammanath is the executive director of the Global Deloitte AI Institute. She has had stints at GE, HPE and Bank of America, in roles such as vice president of data science and innovation, CTO of artificial intelligence and lead of data and analytics.