AAAI AI-Alert for Dec 21, 2021
Autonomous Weapons Are Here, but the World Isn't Ready for Them
This may be remembered as the year when the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It's also the year when policymakers failed to agree on what to do about it. On Friday, 120 countries participating in the United Nations' Convention on Certain Conventional Weapons could not agree on whether to limit the development or use of lethal autonomous weapons. Instead, they pledged to continue and "intensify" discussions. "It's very disappointing, and a real missed opportunity," says Neil Davison, senior scientific and policy adviser at the International Committee of the Red Cross, a humanitarian organization based in Geneva.
Artificial intelligence accurately predicts who will develop dementia in two years
Artificial intelligence can predict which people who attend memory clinics will develop dementia within two years with 92 percent accuracy, a large-scale new study has concluded. Using data from more than 15,300 patients in the US, research from the University of Exeter found that a form of artificial intelligence called machine learning can accurately tell who will go on to develop dementia. The technique works by spotting hidden patterns in the data and learning who is most at risk. The study, published in JAMA Network Open and funded by Alzheimer's Research UK, also suggested that the algorithm could help reduce the number of people who may have been falsely diagnosed with dementia. The researchers analyzed data from people who attended a network of 30 National Alzheimer's Coordinating Center memory clinics in the US.
The Growing Cost of Deep Learning for Source Code
Recent years have seen a steep increase in the use of artificial intelligence methods in software engineering (AI SE) research. The combination of these two fields has unlocked remarkable new abilities: Lachaux et al.'s recent work on unsupervised machine translation of programming languages,15 for instance, learns to generate Java methods from C++ with over 80% accuracy--without curated examples. This would surely have sounded like a vision of a distant future just a decade ago, but such quick progress is indicative of the substantial and unique potential of deep learning for software engineering tasks and domains. Yet these abilities come at a price. The "secret ingredient" is data, as epitomized by Lachaux et al.'s work that utilizes 163 billion tokens across three programming languages.
Artificial Intelligence Across Company Borders
Artificial intelligence (AI) has the potential to increase global economic activity in the industrial sector by $13 trillion by 2030.6 However, this potential remains largely untapped because of a lack of access to, or a failure to effectively leverage, data across company borders.10 AI technologies benefit from large amounts of representative data--often more data than a single company possesses. It is especially challenging to achieve good AI performance in industrial settings with unexpected events or critical system states that are, by definition, rare. Industrial examples include early detection of outages in power systems and prediction of machine faults and remaining useful life, for which robust inference is often precluded.
AI Trained on a Diverse Dataset Performs Better Chest X-ray Analysis
A study presented at the Radiological Society of North America (RSNA) 2021 Annual Meeting demonstrates the importance of using racially diverse datasets while training artificial intelligence (AI) systems to ensure fair outcomes. "As the rapid development of deep learning in medicine continues, there are concerns of potential bias when interpreting radiological images," the authors wrote. "As future medical AI systems are approved by regulators, it is crucial that model performance on different racial/ethnic groups is shared to ensure that safe and fair systems are being implemented." The findings were presented by Brandon Price, a medical student at Florida State University College of Medicine in Tallahassee. Many studies have shown that deep learning systems can be biased in their interpretation of data.
Sticky robot hand inspired by geckos combines delicacy and strength
Mechanical hands with human-like fingers are more adaptable than the simple two-pronged clamps found on industrial robots, but they struggle to match their strength. Now a robotic hand with sticky rubber skin inspired by gecko feet combines the features of both: it can delicately pick up a grape and also lift heavy objects. Geckos' feet are covered in tiny hairs that split into even smaller strands, each of which creates a molecular attraction to the material the animal is climbing on. Mark Cutkosky at Stanford University in California and his colleagues have previously created materials that mimic the way a gecko's foot sticks to smooth surfaces and used them in robots, some of which are designed to latch on to a satellite's smooth surface in space. Now, Cutkosky and his team have improved the design of their latest gecko-inspired material and applied it to a robot hand. The material evenly spreads the load of the object being manipulated, preventing the object's edges from peeling away from the hand, which would otherwise undermine adhesion.
This huge Chinese company is selling video surveillance systems to Iran
A Chinese company is selling its surveillance technology to Iran's Revolutionary Guard, police, and military, according to a new report by IPVM, a surveillance research group. The firm, called Tiandy, is one of the world's largest video surveillance companies, reporting almost $700 million in sales in 2020. The company sells cameras and accompanying AI-enabled software, including facial recognition technology, software that it claims can detect someone's race, and "smart" interrogation tables for use alongside "tiger chairs," which have been widely documented as a tool for torture. The report is a rare look into some specifics of China's strategic relationship with Iran and the ways in which the country disperses surveillance technology to other autocracies abroad. Tiandy's "ethnicity tracking" tool, which has been widely challenged by experts as both inaccurate and unethical, is believed to be one of several AI-based systems the Chinese government uses to repress the Uyghur minority group in the country's Xinjiang province, along with Huawei's face recognition software, emotion-detection AI technologies, and a host of others.