AI-Alerts


Spatial Concepts in the Conversation With a Computer

Communications of the ACM

Human interactions with the physical environment are often mediated through information services, and sometimes depend on them. These interactions with the environment relate to a range of scales [28], in the scenario here from the "west of the city" to the "back of the store," or beyond the scenario to "the cat is under the sofa." They go far beyond references to places that are recorded in geographic gazetteers [37], both in scale (the place where the cat is) and conceptualization (the place that forms the west of the city [29]), or that fit the classical coordinate-based representations of digital maps. And yet these kinds of services have to use such digital representations of environments: digital maps, building information models, knowledge bases, or just text and documents. Moreover, their abilities to interact are limited to either fusing with the environment [44] or using media such as maps, photos, augmented reality, or voice. These interactions also happen in a vast range of real-world contexts, or in situ, in which conversation partners typically adapt their conversational strategies to their interlocutor, based on mutual information, activities, and the shared situation [2]. Verbal information sharing and conversations about places may also be more suitable when visual communication through maps or imagery is inaccessible, distracting, or irrelevant, such as when navigating in a familiar shopping mall.
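The article's examples resist coordinate-based representation; as a toy illustration, qualitative relations like "the cat is under the sofa" can be stored as subject-relation-object triples and chained to answer simple "where" questions. Everything in this sketch (the entities, the relations, and the chaining rule) is a made-up illustration, not the authors' model:

```python
# Toy knowledge base of qualitative spatial relations of the kind the
# article argues digital maps cannot capture. All entities and relation
# names here are hypothetical.
facts = {
    ("cat", "under", "sofa"),
    ("sofa", "in", "living_room"),
    ("store_back", "part_of", "store"),
    ("store", "in", "city_west"),
}

def where_is(entity):
    """Answer 'where is X?' by chaining containment-like relations."""
    answers = []
    frontier = [entity]
    while frontier:
        current = frontier.pop()
        for subj, rel, obj in facts:
            if subj == current:
                answers.append((rel, obj))
                frontier.append(obj)  # follow the chain upward
    return answers

print(where_is("cat"))
# [('under', 'sofa'), ('in', 'living_room')]
```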


The Limits of Differential Privacy (and Its Misuse in Data Release and Machine Learning)

Communications of the ACM

The traditional approach to statistical disclosure control (SDC) for privacy protection is utility-first. Since the 1970s, national statistical institutes have been using anonymization methods with heuristic parameter choices and suitable utility-preservation properties to protect data before release. Their goal is to publish analytically useful data that cannot be linked to specific respondents or leak confidential information about them. In the late 1990s, the computer science community took another angle and proposed privacy-first data protection. In this approach, a privacy model specifying an ex ante privacy condition is enforced using one or more SDC methods, such as noise addition, generalization, or microaggregation.
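As one concrete instance of the privacy-first approach the article describes, the Laplace mechanism enforces ε-differential privacy on a numeric query by adding noise scaled to the query's sensitivity. The sketch below is a minimal illustration with illustrative parameter values, not code from the article:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private estimate of a numeric query.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    construction for epsilon-differential privacy.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a counting query over a toy dataset.
# A counting query changes by at most 1 when one respondent is added
# or removed, so its sensitivity is 1.
ages = np.array([34, 45, 23, 67, 51, 29, 40])
true_count = int(np.sum(ages > 30))  # 5
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true={true_count}, private={private_count:.1f}")
```

Smaller ε means stronger privacy but noisier answers; the tension between the two is exactly the utility/privacy trade-off the article examines.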



Robots may soon be able to reproduce – will this change how we think about evolution? | Emma Hart

The Guardian

From the bottom of the oceans to the skies above us, natural evolution has filled our planet with a vast and diverse array of lifeforms, with approximately 8 million species adapted to their surroundings in a myriad of ways. Yet 100 years after Karel Čapek coined the term robot, the functional abilities of many species still surpass the capabilities of current human engineering, which has yet to convincingly develop methods of producing robots that demonstrate human-level intelligence, move and operate seamlessly in challenging environments, and are capable of robust self-reproduction. But could robots ever reproduce? This, undoubtedly, forms a pillar of "life" as shared by all natural organisms. A team of researchers from the UK and the Netherlands have recently demonstrated a fully automated technology to allow physical robots to repeatedly breed, evolving their artificial genetic code over time to better adapt to their environment.
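The Guardian piece does not detail the breeding technology, but "evolving their artificial genetic code" follows the general shape of an evolutionary algorithm: evaluate, select, recombine, mutate, repeat. A minimal, generic sketch with an entirely made-up fitness function:

```python
import random

GENOME_LENGTH = 16  # hypothetical number of parameters in a robot "genome"
POPULATION = 20
GENERATIONS = 50

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Stand-in for evaluating a physical robot in its environment;
    # here we simply reward genomes close to an arbitrary target vector.
    target = [0.5] * GENOME_LENGTH
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(POPULATION)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION // 2]  # select the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POPULATION - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```

In the work the article describes, the evaluation step is physical rather than simulated: robots are built, tested in the real world, and the better-adapted designs go on to "breed."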


Candy Shop Slaughter is a video game concept created by AI

#artificialintelligence

It is possible for artificial intelligence to create a video game. Contrary to popular opinion, and to hopes for humanity, an AI came up with the basic design for a video game called Candy Shop Slaughter, a concept with all of the elements needed for success in the competitive mobile game industry. Games are thriving despite the pandemic, and video game jobs are growing in spite of competition from automation. Video games are a creative art, and it is hard to believe that a machine can come up with the kind of creativity needed to make such a work.


Baltimore May Soon Ban Face Recognition for Everyone but Cops

WIRED

After years of failed attempts to curb surveillance technologies, Baltimore is close to enacting one of the nation's most stringent bans on facial recognition. But Baltimore's proposed ban would be very different from laws in San Francisco or Portland, Oregon: it would last for only one year, police would be exempt, and certain private uses of the tech would become illegal. City councilmember Kristerfer Burnett, who introduced the proposed ban, says it was shaped by the nuances of Baltimore, though critics complain it could unfairly penalize, or even jail, private citizens who use the tech. Last year, Burnett introduced a version of the bill that would have banned city use of facial recognition permanently. When that failed, he instead introduced this version, with a built-in one-year "sunset" clause requiring council approval for the ban to be extended.


Synthetic data in machine learning for medicine and healthcare

#artificialintelligence

As artificial intelligence (AI) for applications in medicine and healthcare undergoes increased regulatory analysis and clinical adoption, the data used to train the algorithms are coming under increasing scrutiny. Scrutiny of the training data is central to understanding algorithmic biases and pitfalls. These can arise from datasets with sample-selection biases – for example, data from a hospital that admits patients with certain socioeconomic backgrounds, or medical images acquired with one particular type of equipment or camera model. Algorithms trained on data with sample-selection biases typically fail when deployed in settings sufficiently different from those in which the training data were acquired [1]. Biases can also arise owing to class imbalances – as is typical of data associated with rare diseases – which degrade the performance of trained AI models for diagnosis and prognosis.
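One common response to the class-imbalance problem the article raises is to synthesize extra minority-class samples. The sketch below uses a simplified SMOTE-style interpolation between random pairs of real minority samples; it is a toy illustration on fabricated data, not a method from the article:

```python
import numpy as np

def oversample_minority(X, y, minority_label, rng=None):
    """Naive SMOTE-style oversampling: synthesize minority samples by
    interpolating between random pairs of real minority samples until
    the classes are balanced."""
    if rng is None:
        rng = np.random.default_rng(0)
    minority = X[y == minority_label]
    deficit = int((y != minority_label).sum() - len(minority))
    synthetic = []
    for _ in range(deficit):
        a, b = minority[rng.integers(len(minority), size=2)]
        lam = rng.random()
        synthetic.append(a + lam * (b - a))  # point on the segment a-b
    X_new = np.vstack([X, synthetic])
    y_new = np.concatenate([y, np.full(deficit, minority_label)])
    return X_new, y_new

# Toy dataset: 95 "common disease" vs 5 "rare disease" samples.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(3, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)
X_bal, y_bal = oversample_minority(X, y, minority_label=1)
print(np.bincount(y_bal))  # [95 95]
```

Note that naive interpolation can amplify whatever biases the few real minority samples carry, which is one reason the article argues for scrutiny of synthetic training data itself.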


Stealthy marine robot begins studying mysterious deep-water life

New Scientist

A stealthy autonomous underwater robot that can track elusive underwater creatures without disturbing them could help us better understand the largest daily migration of life on Earth. Mesobot, a 250-kilogram robot that operates either unconnected to a power source or tethered with a lightweight fibre-optic cable, is able to move around below the surface unobtrusively. The ocean's twilight zone – known more formally as the mesopelagic zone – lies between about 200 metres and 1 kilometre in depth. It is the site of the diel vertical migration (DVM), a daily phenomenon during which deep-dwelling animals come closer to the surface to feed on the more plentiful food supplies found there, while dodging predators. The DVM is seen by biologists as a very important way in which nutrients – and carbon dioxide captured via photosynthesis – can be rapidly transported to depth, where carbon can be stored for the long term.


Reports of the Association for the Advancement of Artificial Intelligence's 2021 Spring Symposium Series

Interactive AI Magazine

The Association for the Advancement of Artificial Intelligence's 2021 Spring Symposium Series was held virtually from March 22-24, 2021. There were ten symposia in the program: Applied AI in Healthcare: Safety, Community, and the Environment; Artificial Intelligence for K-12 Education; Artificial Intelligence for Synthetic Biology; Challenges and Opportunities for Multi-Agent Reinforcement Learning; Combining Machine Learning and Knowledge Engineering; Combining Machine Learning with Physical Sciences; Implementing AI Ethics; Leveraging Systems Engineering to Realize Synergistic AI/Machine-Learning Capabilities; Machine Learning for Mobile Robot Navigation in the Wild; and Survival Prediction: Algorithms, Challenges and Applications. This report contains summaries of all the symposia.

The two-day international virtual symposium on Applied AI in Healthcare included invited speakers, presenters of research papers, and breakout discussions with attendees from around the world, including the US, Canada, Central America, Switzerland, and cities such as Melbourne, Paris, Berlin, Lisbon, Beijing, and Amsterdam. We had active discussions about solving health-related, real-world issues in various emerging, ongoing, and underrepresented areas using innovative technologies, including artificial intelligence and robotics. We primarily focused on AI-assisted and robot-assisted healthcare, with specific attention to improving safety, the community, and the environment through the latest technological advances in our respective fields. The first day was kicked off by Raj Puri, Physician and Director of Strategic Health Initiatives & Innovation at Stanford University, who spoke about a novel, automated sentinel surveillance system his team built to mitigate COVID and its integration into their public-facing dashboard of clinical data and metrics. Selected paper presentations across both days were wide-ranging, including a talk by Oliver Bendel, a professor from Switzerland, and his Swiss colleague Alina Gasser on co-robots in care and support, covering the latest technologies for human-robot interaction and communication. Yizheng Zhao, Associate Professor at Nanjing University, and her colleagues from China discussed views of ontologies with applications to logical difference computation in the healthcare sector. Pooria Ghadiri of McGill University, Montreal, Canada, discussed his research on AI enhancements to healthcare delivery for adolescents with mental health problems in primary care settings.


Tech Companies Are Training AI to Read Your Lips

#artificialintelligence

The task is incredibly challenging – even expert human lip readers are actually pretty poor at word-for-word interpretation. In 2018, Google subsidiary DeepMind published research unveiling its latest full-sentence lip-reading system. The AI achieved a word error rate (the percentage of words it got wrong) of 41 percent on videos containing full sentences. Human lip readers viewing a similar sample of video-only clips had word error rates of 93 percent when given no context about the subject matter and 86 percent when given the video's title, subject category, and several words in the sentence. That study was conducted using a large, custom-curated dataset.
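For reference, word error rate is conventionally computed as the word-level edit distance (substitutions, insertions, and deletions) divided by the number of words in the reference transcript, so it is slightly richer than a simple "percent of words wrong" and can even exceed 100 percent. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,                # substitution or match
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    return dp[len(ref)][len(hyp)] / len(ref)

# A 41% WER means roughly 4 in 10 words need correcting:
print(word_error_rate("the cat sat on the mat",
                      "the cat sat in a mat"))  # 2 edits / 6 words = 0.33
```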