This article provides an overview of evolutionary robotics research in which evolution takes place continuously in a population of robots. Ficici et al. (1999) coined the phrase embodied evolution for evolutionary processes that are distributed over the robots in the population, allowing them to adapt autonomously and continuously. As robotics technology becomes both more capable and more economically viable, individual robots operated at great expense by teams of experts are increasingly supplemented by collectives of robots used cooperatively under minimal human supervision (Bellingham and Rajan, 2007), and embodied evolution can play a crucial role in enabling autonomous online adaptation in such robot collectives.
We used crowdsourcing (CS) to examine how the COVID-19 lockdown affected the content of dreams and nightmares. The CS took place during the sixth week of the lockdown. Over the course of one week, 4,275 respondents (mean age 43, SD = 14 years) assessed their sleep, and 811 reported their dream content. Overall, respondents slept substantially more (54.2%) but reported increased awakenings (28.6%) and nightmares (26%) relative to the pre-pandemic situation. We transcribed the dream content into word lists and performed unsupervised computational network and cluster analysis of word associations, which suggested 33 dream clusters, including 20 bad dream clusters, 55% of which were pandemic-specific (e.g., Disease Management, Disregard of Distancing, Elderly in Trouble). The dream-association networks were more accentuated for respondents who reported an increase in perceived stress. This CS survey of dream-association networks and pandemic stress introduces novel, collectively shared COVID-19 bad dream content.
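The word-association network and cluster analysis described above can be illustrated with a minimal sketch: build a graph whose nodes are dream words, link words that co-occur within the same report, and apply an unsupervised community-detection algorithm. This is an assumption-laden illustration, not the study's actual pipeline: the example dream word lists are invented, and greedy modularity maximization (via `networkx`) stands in for whichever clustering method the authors used.

```python
# Illustrative sketch of a word-association network with unsupervised
# clustering. The dream word lists below are hypothetical examples.
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical transcribed dream reports, each reduced to a word list.
dreams = [
    ["mask", "crowd", "distance", "fear"],
    ["mask", "hospital", "fear"],
    ["exam", "school", "late"],
    ["exam", "school", "teacher"],
]

# Count how often each word pair co-occurs within a report.
pair_counts = Counter()
for words in dreams:
    pair_counts.update(combinations(sorted(set(words)), 2))

# Build the association network: one weighted edge per co-occurring pair.
G = nx.Graph()
for (a, b), w in pair_counts.items():
    G.add_edge(a, b, weight=w)

# Unsupervised clustering via greedy modularity maximization.
clusters = list(
    nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
)
for i, cluster in enumerate(clusters):
    print(i, sorted(cluster))
```

On this toy input, the pandemic-themed words and the school-themed words fall into separate communities, mirroring how thematically related dream words group into clusters.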
Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers, and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithmic decision-making that can create new safety risks and discriminatory outcomes. Technical issues in AVs' perception, decision-making, and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues, highlight existing research gaps, and argue for mitigating these issues through the design of AV algorithms and of policies and regulations, so as to fully realise AVs' benefits for smart and sustainable cities.