Collaborating Authors

Reports of the Association for the Advancement of Artificial Intelligence's 2021 Spring Symposium Series

Interactive AI Magazine

The Association for the Advancement of Artificial Intelligence's 2021 Spring Symposium Series was held virtually from March 22-24, 2021. There were ten symposia in the program: Applied AI in Healthcare: Safety, Community, and the Environment; Artificial Intelligence for K-12 Education; Artificial Intelligence for Synthetic Biology; Challenges and Opportunities for Multi-Agent Reinforcement Learning; Combining Machine Learning and Knowledge Engineering; Combining Machine Learning with Physical Sciences; Implementing AI Ethics; Leveraging Systems Engineering to Realize Synergistic AI/Machine-Learning Capabilities; Machine Learning for Mobile Robot Navigation in the Wild; and Survival Prediction: Algorithms, Challenges and Applications. This report contains summaries of all the symposia.

The two-day international virtual symposium on Applied AI in Healthcare included invited speakers, presenters of research papers, and breakout discussions with attendees from around the world, including the US, Canada, Melbourne, Paris, Berlin, Lisbon, Beijing, Central America, Amsterdam, and Switzerland. We had active discussions about solving health-related, real-world problems in various emerging, ongoing, and underrepresented areas using innovative technologies, including artificial intelligence and robotics. We focused primarily on AI-assisted and robot-assisted healthcare, with specific attention to improving safety, the community, and the environment through the latest technological advances in our respective fields. The day was kicked off by Raj Puri, Physician and Director of Strategic Health Initiatives & Innovation at Stanford University, who spoke about a novel, automated sentinel surveillance system his team built to mitigate COVID-19 and its integration into their public-facing dashboard of clinical data and metrics.
Selected paper presentations across both days were wide-ranging. Oliver Bendel, a professor from Switzerland, and his colleague Alina Gasser discussed co-robots in care and support, providing the latest information on technologies for human-robot interaction and communication. Yizheng Zhao, Associate Professor at Nanjing University, and her colleagues from China discussed views of ontologies with applications to logical difference computation in the healthcare sector. Pooria Ghadiri from McGill University, Montreal, Canada discussed his research on AI enhancements to healthcare delivery for adolescents with mental health problems in primary care settings.

A review of machine learning applications in wildfire science and management

Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has rapidly progressed in step with the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as to illustrate the challenging range of problems in wildfire science available to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, in which the most frequently used ML methods included random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. There exist opportunities to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, while the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community plays an active role in providing relevant, high-quality data for use by practitioners of ML methods.
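To make the fire-occurrence problem domain concrete, the sketch below trains a miniature random-forest-style ensemble (bootstrap-sampled decision stumps with majority voting) on synthetic fire-weather records. Everything here is illustrative and assumed, not drawn from any reviewed study: the features (temperature, humidity, wind), the toy labelling rule, and the tiny ensemble stand in for the observed weather and fuel data and full random forests used in practice.

```python
import random

random.seed(0)

# Hypothetical synthetic data: each record is (temperature degC, relative
# humidity %, wind speed km/h) with a binary fire-occurrence label.
def make_record():
    t = random.uniform(5, 40)
    h = random.uniform(10, 90)
    w = random.uniform(0, 50)
    # Toy ground truth: hot, dry, or hot, windy days burn.
    label = 1 if (t > 25 and h < 40) or (t > 30 and w > 30) else 0
    return (t, h, w), label

data = [make_record() for _ in range(400)]

def train_stump(sample):
    """Pick the (feature, threshold, polarity) with the fewest errors."""
    best = None
    for f in range(3):
        for x, _ in sample:
            thr = x[f]
            for pol in (1, -1):
                errs = sum(
                    1 for xi, yi in sample
                    if (1 if pol * (xi[f] - thr) > 0 else 0) != yi
                )
                if best is None or errs < best[0]:
                    best = (errs, f, thr, pol)
    return best[1:]

def predict_stump(stump, x):
    f, thr, pol = stump
    return 1 if pol * (x[f] - thr) > 0 else 0

# Bootstrap-sampled stumps with majority voting: a bare-bones stand-in
# for the random forests that dominate the reviewed literature.
forest = [train_stump(random.choices(data, k=100)) for _ in range(15)]

def predict(x):
    votes = sum(predict_stump(s, x) for s in forest)
    return 1 if votes * 2 > len(forest) else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

A production pipeline would instead use a full random forest implementation with many deep trees, real observational data, and proper train/test splitting; the sketch only shows the bagging-plus-voting idea.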

Abolish the #TechToPrisonPipeline

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.

Explainable Artificial Intelligence: a Systematic Review

The rapid growth of interest in explainable artificial intelligence (XAI) has led to the development of a plethora of domain-dependent and context-specific methods for interpreting machine learning (ML) models and forming explanations for humans. This trend is far from over, and the resulting body of knowledge is scattered and in need of organisation. The goal of this article is to systematically review research works in the field of XAI and to try to define some boundaries in the field. From several hundred research articles focused on the concept of explainability, about 350 have been considered for review using the following search methodology. In the first phase, Google Scholar was queried to find papers related to "explainable artificial intelligence", "explainable machine learning" and "interpretable machine learning". Subsequently, the bibliographic sections of these articles were thoroughly examined to retrieve further relevant scientific studies. The first noticeable thing, as shown in figure 2 (a), is the distribution of the publication dates of the selected research articles: sporadic in the 70s and 80s, receiving preliminary attention in the 90s, showing rising interest in the 2000s and becoming a recognised body of knowledge after 2010. The first research concerned the development of an explanation-based system and its integration in a computer program designed to help doctors make diagnoses [3]. Some of the more recent papers focus on work devoted to the clustering of methods for explainability, motivating the need for organising the XAI literature [4, 5, 6].
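The two-phase search methodology (keyword query, then backward snowballing through bibliographies) can be sketched as a small procedure. The mini-corpus, paper identifiers, and titles below are entirely hypothetical; the real review queried Google Scholar rather than a local graph.

```python
# Hypothetical mini-corpus: paper id -> (title, list of referenced ids).
corpus = {
    "p1": ("Explainable artificial intelligence in medicine", ["p3", "p4"]),
    "p2": ("Interpretable machine learning for credit scoring", ["p4"]),
    "p3": ("Rule extraction from neural networks", ["p5"]),
    "p4": ("A survey of explanation methods", ["p5", "p6"]),
    "p5": ("Expert systems and explanation", []),
    "p6": ("Deep learning", []),
    "p7": ("Reinforcement learning basics", []),
}

queries = ["explainable artificial intelligence",
           "interpretable machine learning"]

# Phase 1: keyword match against titles, standing in for a Scholar query.
seeds = {pid for pid, (title, _) in corpus.items()
         if any(q in title.lower() for q in queries)}

# Phase 2: one round of backward snowballing, following each selected
# paper's bibliography to retrieve further relevant studies.
selected = set(seeds)
for pid in seeds:
    selected.update(corpus[pid][1])
```

Note that snowballing recovers papers (here the survey "p4") whose titles never match the query terms, which is why the review's second phase matters for coverage.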

The AI Index 2021 Annual Report

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.