Today, we are bombarded by messages about the ways in which artificial intelligence (AI) is changing our world, and about its future promises and perils. But today's AI, built on machine learning, is very different from much of the AI of the past. From the 1970s until the 1990s, a very different approach, called "expert systems," appeared poised to radically change society in many of the same ways that today's machine learning does. Expert systems sought to encode in software the experience and understanding of the finest human specialists in everything from diagnosing an infectious disease to identifying the sonar fingerprint of an enemy submarine, and then to have these systems suggest reasoned decisions and conclusions in new, real-world cases. Today, many such expert systems are commonplace, in everything from maintenance and repair software to automated customer support systems of various sorts.
In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, which were later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. Verifying these properties is complicated by the field's rapid development and by highly complex deployments in health care, financial trading, transportation, and translation, among other areas. Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks.1 Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling rapid adaptation of malware to adjust to restrictions imposed by countermeasures and security controls.2
Sustainability, privacy, and transparency remain underaddressed and unsolved challenges in AI. In a newly published paper on the preprint server Arxiv.org, the coauthors assert that through techniques like compute-efficient machine learning, federated learning, and data sovereignty, scientists and practitioners have the power to cut AI's contributions to the carbon footprint while restoring trust in historically opaque systems. In June 2019, researchers at the University of Massachusetts at Amherst released a study estimating that the amount of power required for training and searching a given model involves the emission of roughly 626,000 pounds of carbon dioxide -- equivalent to nearly 5 times the lifetime emissions of the average U.S. car. Meanwhile, partnerships like those pursued by DeepMind and the U.K.'s National Health Service conceal the true nature of AI systems being developed and piloted.
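The paper itself is not quoted in detail here, but federated learning, one of the techniques it names, can be sketched minimally as federated averaging: each client takes a training step on its own private data, and a server averages only the resulting model parameters, so raw data never leaves its owner. The toy linear model, learning rate, and client data below are illustrative assumptions, not taken from the paper.

```python
def local_step(w, data, lr=0.05):
    """One gradient-descent step for y ~ w * x on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_datasets, rounds=50):
    w = 0.0  # shared global parameter, broadcast to all clients each round
    for _ in range(rounds):
        # Each client refines the global model locally ...
        local_ws = [local_step(w, d) for d in client_datasets]
        # ... and the server averages the parameters, never seeing the data.
        w = sum(local_ws) / len(local_ws)
    return w

# Two clients whose private data both follow y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = fed_avg(clients)
```

In this sketch the averaged model converges to the shared underlying slope (w ≈ 3) even though neither client ever transmits its data points, which is the privacy property the technique is meant to provide.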
A few months ago, NASA unveiled its next-generation space suit that will be worn by astronauts when they return to the moon in 2024 as part of the agency's plan to establish a permanent human presence on the lunar surface. The Extravehicular Mobility Unit--or xEMU--is NASA's first major upgrade to its space suit in nearly 40 years and is designed to make life easier for astronauts who will spend a lot of time kicking up moon dust. It will allow them to bend and stretch in ways they couldn't before, easily don and doff the suit, swap out components for a better fit, and go months without making a repair. The suit's most critical systems aren't visible at all; they're hidden away in the xEMU's portable life-support system, the astro backpack that turns the space suit from a bulky piece of fabric into a personal spacecraft. It handles the space suit's power, communications, oxygen supply, and temperature regulation so that astronauts can focus on important tasks like building launch pads out of pee concrete.
Kash: The police are supposed to use facial recognition identification only as an investigative lead. But instead, people treat facial recognition as a kind of magic. And that's why you get a case where someone was arrested based on flawed software combined with inadequate police work. Witness testimony is also very troubling. That has been a selling point for many facial recognition technologies.
Amazon for several years has worked on self-driving technology to deliver goods, a natural fit with its shopping business. Last year, it invested in Aurora, a driverless-technology start-up. Mr. Wilke has expressed concerns in the past that Uber, through its ride-hailing business, could build a direct delivery relationship with customers that it could use to compete with Amazon, according to a person with direct knowledge of the comments, who spoke only anonymously for fear of retaliation for discussing internal conversations. Uber has said it wants to be the Amazon of transportation, though its self-driving ambitions have been derailed by cost-cutting and legal battles.
Computer scientists at Loughborough University in the U.K. have developed artificial intelligence algorithms that could revolutionize player performance analysis for football (soccer) clubs. The researchers designed a hybrid system that accelerates and supplements human data entry with camera-based automation to meet demand for timely performance data generated from large amounts of video. The team applied the latest computer vision and deep learning technologies to identify actions by detecting players' body poses and limbs, and trained the deep neural network to track individual players and capture data on individual performance throughout the match video. Loughborough's Baihua Li said the new technology "will allow a much greater objective interpretation of the game as it highlights the skills of players and team cooperation."
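The Loughborough pipeline itself has not been published in code, but one building block it describes -- keeping a consistent identity for each player across video frames -- can be sketched as greedy matching of per-frame detections (bounding boxes, as a pose detector would emit around each player) to existing tracks by intersection-over-union (IoU). Everything below, including the threshold and box coordinates, is a hypothetical illustration rather than the team's actual method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, threshold=0.3):
    """Give each detection the ID of its best-overlapping track, or a
    fresh ID if nothing overlaps enough; returns the new {id: box} map."""
    next_id = max(tracks, default=-1) + 1
    new_tracks = {}
    unmatched = dict(tracks)
    for det in detections:
        best = max(unmatched, key=lambda t: iou(unmatched[t], det), default=None)
        if best is not None and iou(unmatched[best], det) >= threshold:
            new_tracks[best] = det   # same player, one frame later
            del unmatched[best]
        else:
            new_tracks[next_id] = det  # a player entering the frame
            next_id += 1
    return new_tracks

# Frame 1: two players detected; frame 2: both have moved slightly.
tracks = update_tracks({}, [(0, 0, 10, 10), (50, 50, 60, 60)])
tracks = update_tracks(tracks, [(1, 1, 11, 11), (52, 50, 62, 60)])
```

Because each detection in the second frame overlaps heavily with one track from the first, both players keep their original IDs, which is what lets per-player statistics accumulate over the course of a match.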
A US university's claim that it can use facial recognition to "predict criminality" has renewed debate over racial bias in technology. Harrisburg University researchers said their software "can predict if someone is a criminal, based solely on a picture of their face". The software "is intended to help law enforcement prevent crime", it said. But 1,700 academics have signed an open letter demanding that the research remain unpublished. One Harrisburg research member, a former police officer, wrote: "Identifying the criminality of [a] person from their facial image will enable a significant advantage for law-enforcement agencies and other intelligence agencies to prevent crime from occurring."
[Image: A da Vinci surgical robot system performs heart surgery in 2017 at a hospital in Hefei, China. Credit: Shutterstock]

In 2006, China highlighted the importance of robotics in its 15-year plan for science and technology. In 2011, the central government fleshed out these ambitions in its 12th five-year plan, specifying that robots should be used to support society in a wide range of roles, from helping emergency services during natural disasters and firefighting, to performing complex surgery and aiding in medical rehabilitation. Guang-Zhong Yang, head of the Institute of Medical Robotics at Shanghai Jiao Tong University, says that China's robotics research output has been growing steadily for two decades, driven by three major factors: "The clinical utilization of robotics; increased funding levels driven by national planning needs; and advances in engineering in areas such as precision mechatronics, medical imaging, artificial intelligence and new materials for making robots." Yang points out that funding levels for medical robotics from the National Natural Science Foundation of China and the Ministry of Science and Technology began to increase more sharply in 2011 compared with the previous decade. The accompanying rises in research output are closely related to the introduction of specialized robotics equipment in medical-research facilities, says Yao Li, a research scientist at Stanford Robotics Laboratory in California and founder of the company Borns Medical Robotics, based in both Chengdu, China, and Silicon Valley, California.
A photo of the alleged suspect in a theft case in Detroit, left, next to the driver's license photo of Robert Williams. An algorithm said Williams was the suspect, but he and his lawyers say the tool produced a false hit. Police in Detroit were trying to figure out who stole five watches from a Shinola retail store.