Space robotics startup GITAI and the Japan Aerospace Exploration Agency (JAXA) are teaming up to produce the world's first robotics demonstration in space by a private company. The new agreement under the JAXA Space Innovation through Partnership and Co-creation (J-SPARC) initiative aims to demonstrate the potential for robots to automate specific tasks aboard the International Space Station (ISS). Robotics is transforming many fields, and one where it is particularly attractive is the exploration and exploitation of space. Ironically, the great strides made in manned spaceflight since the first Vostok mission lifted off in 1961 have shown not only that supporting astronauts in orbit is challenging and expensive, but also that there are many tasks, like microgravity experiments, where the human touch isn't the best choice. These tasks often require complex, precise, and subtle movements that demand either a highly specialized and expensive bespoke apparatus or a robot.
An innovative artificial intelligence (AI) tool developed by NASA has helped identify a cluster of craters on Mars that formed within the last decade. The new machine-learning algorithm, an automated fresh impact crater classifier, was created by researchers at NASA's Jet Propulsion Laboratory (JPL) in California -- and represents the first time artificial intelligence has been used to identify previously unknown craters on the Red Planet, according to a statement from NASA. Scientists have fed the algorithm more than 112,000 images taken by the Context Camera on NASA's Mars Reconnaissance Orbiter (MRO). The program is designed to scan the photos for changes to Martian surface features that are indicative of new craters. In the case of the algorithm's first batch of finds, scientists think these craters formed from a meteor impact between March 2010 and May 2012.
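The core idea behind scanning orbital photos for new craters is change detection between images of the same terrain taken at different times. The sketch below is only an illustration of that idea, not JPL's actual classifier (which is a trained machine-learning model); the function name, threshold, and synthetic images are all assumptions for demonstration.

```python
import numpy as np

def candidate_impact_mask(before, after, diff_threshold=50, min_pixels=4):
    """Flag pixels that changed sharply between two co-registered
    grayscale frames of the same terrain. A real classifier like JPL's
    learns what a fresh impact looks like; this fixed threshold is only
    a stand-in to show the change-detection idea."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
    mask = diff > diff_threshold
    # Ignore isolated noisy pixels: require a minimum changed area.
    return mask if int(mask.sum()) >= min_pixels else np.zeros_like(mask)

# Synthetic example: a uniform plain, then the same plain with a dark
# blast zone where a simulated impact occurred.
before = np.full((8, 8), 120, dtype=np.uint8)
after = before.copy()
after[2:5, 2:5] = 20
print(int(candidate_impact_mask(before, after).sum()))  # 9 changed pixels
```

In the real pipeline the flagged regions would then be handed to a trained model (and ultimately to human reviewers with higher-resolution imagery) rather than accepted directly.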
Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework that aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems. Called the Adversarial ML Threat Matrix, the initiative is an attempt to organize the different techniques employed by malicious adversaries in subverting ML systems. Just as artificial intelligence (AI) and ML are being deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but can also leverage it to fool machine learning models with poisoned datasets, thereby causing beneficial systems to make incorrect decisions and posing a threat to the stability and safety of AI applications. Indeed, ESET researchers last year found Emotet -- a notorious email-based malware behind several botnet-driven spam campaigns and ransomware attacks -- to be using ML to improve its targeting. Then earlier this month, Microsoft warned about a new Android ransomware strain that included a machine learning model that, while yet to be integrated into the malware, could be used to fit the ransom note image within the screen of the mobile device without any distortion.
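To make the data-poisoning threat concrete, here is a deliberately tiny illustration (not taken from the Threat Matrix itself): a one-dimensional nearest-centroid classifier flips its decision after an attacker injects a few mislabeled training points. All names and numbers are invented for the example.

```python
def nearest_centroid_predict(train, x):
    """Classify x by the nearest class mean of the training data.
    train is a list of (value, label) pairs."""
    by_label = {}
    for value, label in train:
        by_label.setdefault(label, []).append(value)
    means = {label: sum(vs) / len(vs) for label, vs in by_label.items()}
    return min(means, key=lambda label: abs(x - means[label]))

clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (10, "malicious"), (11, "malicious"), (12, "malicious")]
# The attacker plants a few mislabeled points near the benign region,
# dragging the "malicious" centroid toward it.
poison = [(4, "malicious")] * 3

print(nearest_centroid_predict(clean, 5))           # benign
print(nearest_centroid_predict(clean + poison, 5))  # malicious
```

Real poisoning attacks target far larger models, but the mechanism is the same: corrupted training data shifts the decision boundary so that previously correct inputs are misclassified.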
Pfizer and IBM researchers claim to have developed a machine learning technique that can predict Alzheimer's disease years before symptoms develop. By analyzing small samples of language data obtained from clinical verbal tests, the team says their approach achieved 71% accuracy when tested against a group of cognitively healthy people. Alzheimer's disease begins with vague, often misinterpreted signs of mild memory loss followed by a slow, progressively serious decline in cognitive ability and quality of life. According to the nonprofit Alzheimer's Association, more than 5 million Americans of all ages have Alzheimer's, and every state is expected to see at least a 14% rise in the prevalence of Alzheimer's between 2017 and 2025. Due to the nature of Alzheimer's disease and how it takes hold in the brain, it's likely that the best way to delay its onset is through early intervention.
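Approaches like this typically start by turning a transcript of a verbal test into simple linguistic features (vocabulary richness, word length, fluency measures) that feed a downstream classifier. The features below are a hypothetical toy set for illustration only; the actual Pfizer/IBM feature pipeline is not reproduced here.

```python
def linguistic_features(transcript):
    """Toy feature extractor of the kind used in language-based cognitive
    screening. These three features are illustrative assumptions, not the
    study's actual feature set."""
    words = transcript.lower().split()
    types = set(words)
    n = len(words)
    return {
        "n_words": n,
        # Vocabulary richness: distinct words / total words.
        "type_token_ratio": len(types) / n if n else 0.0,
        "mean_word_length": sum(map(len, words)) / n if n else 0.0,
    }

feats = linguistic_features("the cat and the dog")
print(feats["type_token_ratio"])  # 0.8 (4 distinct words out of 5)
```

In a full pipeline, feature vectors like this for many subjects would be paired with later diagnoses to train a classifier whose accuracy could then be evaluated on held-out cognitively healthy participants.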
The Millennium Institute for Foundational Research on Data (IMFD) started its operations in June 2018, funded by the Millennium Science Initiative of the Chilean National Agency of Research and Development. IMFD is a joint initiative led by Universidad de Chile and Universidad Católica de Chile, with the participation of five other Chilean universities: Universidad de Concepción, Universidad de Talca, Universidad Técnica Federico Santa María, Universidad Diego Portales, and Universidad Adolfo Ibáñez. IMFD aims to be a reference center in Latin America for state-of-the-art research on the foundational problems of data, as well as its applications to tackling diverse issues ranging from scientific challenges to complex social problems. As tasks of this kind are interdisciplinary by nature, IMFD gathers a large number of researchers in several areas that include traditional computer science areas such as data management, Web science, algorithms and data structures, privacy and verification, information retrieval, data mining, machine learning, and knowledge representation, as well as some areas from other fields, including statistics, political science, and communication studies. IMFD currently hosts 36 researchers, seven postdoctoral fellows, and more than 100 students.
Forests are the major terrestrial ecosystem responsible for carbon sequestration and storage. The Amazon rainforest is the world's largest tropical rainforest, encompassing up to 2,124,000 square miles and covering a large area of South America spanning nine countries. The majority of that area (69%) lies in Brazil. Thus, Amazonia holds about 20% of the total carbon contained in the world's terrestrial vegetation.1,5,7 But rampant deforestation driven by illegal logging, mining, cattle ranching, and soy plantations threatens this vast region.
Transaction data is like a friendship tie: both parties must respect the relationship, and if one party exploits it, the relationship sours. As data becomes increasingly valuable, firms must take care not to exploit their users or they will sour their ties. Ethical uses of data cover a spectrum: at one end, using patient data in healthcare to cure patients is little cause for concern. At the other end, selling data to third parties who exploit users is serious cause for concern.2 Between these two extremes lies a vast gray area where firms need better ways to frame data risks and rewards in order to make better legal and ethical choices.
In today's world, it is nearly impossible to avoid voice-controlled digital assistants. From the interactive intelligent agents used by corporations and government agencies to personal devices, automated speech recognition (ASR) systems, combined with machine learning (ML) technology, increasingly are being used as an input modality that allows humans to interact with machines, ostensibly via the most common and simplest way possible: by speaking in a natural, conversational voice. Yet as a study published in May 2020 by researchers from Stanford University indicated, the accuracy of ASR systems from Google, Facebook, Microsoft, and others varies widely depending on the speaker's race. While this study only focused on the differing accuracy levels for a small sample of African American and white speakers, it points to a larger concern about ASR accuracy and phonological awareness, including the ability to discern and understand accents, tonalities, rhythmic variations, and speech patterns that may differ from the voices used to initially train voice-activated chatbots, virtual assistants, and other voice-enabled systems. The Stanford study, which was published in the journal Proceedings of the National Academy of Sciences, measured the error rates of ASR technology from Amazon, Apple, Google, IBM, and Microsoft by comparing the systems' performance in understanding identical phrases (taken from pre-recorded interviews across two datasets) spoken by 73 black and 42 white speakers, then computing the average word error rate (WER) for black and for white speakers.
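Word error rate, the metric the Stanford study compared across speaker groups, is the standard ASR benchmark: the minimum number of word substitutions, deletions, and insertions needed to turn the system's transcript into the reference transcript, divided by the reference length. It can be computed with a classic edit-distance dynamic program, as in this minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + sub_cost)  # match/substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over a
# six-word reference give WER = 2/6.
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
```

Averaging this rate separately over black and white speakers' recordings is how the study quantified the accuracy gap.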
A pandemic is raging with devastating consequences, and long-standing problems with racial bias and political polarization are coming to a head. Artificial intelligence (AI) has the potential to help us deal with these challenges. However, AI's risks have become increasingly apparent. Scholarship has illustrated cases of AI opacity and lack of explainability, design choices that result in bias, negative impacts on personal well-being and social interactions, and changes in power dynamics between individuals, corporations, and the state, contributing to rising inequalities. Whether AI is developed and used in good or harmful ways will depend in large part on the legal frameworks governing and regulating it.
Autonomous vehicle design involves an almost incomprehensible combination of engineering tasks including sensor fusion, path planning, and predictive modeling of human behavior. But despite the best efforts to consider all possible real-world outcomes, things can go awry. More than two and a half years ago, in Tempe, Arizona, an Uber "self-driving" car crashed into pedestrian Elaine Herzberg, killing her. In mid-September, the safety driver behind the wheel of that car, Rafaela Vasquez, was charged with negligent homicide. Uber's test vehicle was driving 39 mph when it struck Herzberg. Uber's sensors detected her six seconds before impact but determined that the object sensed was a false positive.