Welcome to the second of our monthly digests, designed to keep you up to date with the happenings in the AI world. You can catch up with any AIhub stories you may have missed, get the low-down on recent conferences, and generally immerse yourself in all things AI. You may be aware that we are running a focus series on the UN sustainable development goals (SDGs). Each month we tackle a different SDG and cover some of the AI research linked to that particular goal. In February it was the turn of climate action.
Konstantin Klemmer is a PhD student at the University of Warwick working at the intersection of machine learning and geographic data. He also serves as the Communications Chair for Climate Change AI. We talked about his research and the Climate Change AI organisation. Climate Change AI (CCAI) is a volunteer-run organisation that catalyses impactful work at the intersection of climate change and machine learning by providing education and infrastructure, building a community, and advancing discourse. "We also run a forum and regular community events like our fortnightly happy hour."
Abstract: Conceptual abstraction and analogy-making are key abilities underlying humans' abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress in this area.
Researchers in EPFL's Digital and Cognitive Musicology Lab used an unsupervised machine learning model to "listen to" and categorize more than 13,000 pieces of Western classical music, revealing how modes – such as major and minor – have changed throughout history. Many people may not be able to define what a minor mode is in music, but most would almost certainly recognize a piece played in a minor key. That's because we intuitively differentiate the set of notes belonging to the minor scale – which tend to sound dark, tense, or sad – from those in the major scale, which more often connote happiness, strength, or lightness. But throughout history, there have been periods when multiple other modes were used in addition to major and minor – or when no clear separation between modes could be found at all. Understanding and visualizing these differences over time is what Digital and Cognitive Musicology Lab (DCML) researchers Daniel Harasim, Fabian Moss, Matthias Ramirez, and Martin Rohrmeier set out to do in a recent study, which has been published in the open-access journal Humanities and Social Sciences Communications.
The Bulgarian government has adopted a "Concept for the Development of Artificial Intelligence", with a planning horizon of 2030. This strategy is in line with European Commission documents, which consider AI one of the main drivers of digital transformation in Europe and a significant factor in ensuring the competitiveness of the European economy and a high quality of life. Specific aspects of the European vision of "reliable AI" are included, namely that technological progress be accompanied by a legal and ethical framework to ensure the security and rights of citizens. The strategy also includes details on collecting accessible high-quality data, disseminating information, and ensuring equal access to the benefits of AI technologies. The concept document gives an overview of the three main sectors involved in AI: sectors developing AI, sectors consuming AI, and sectors enabling the development and implementation of AI.
Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques, and assisting in the design of brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence. "I am interested in bio-inspired machine learning. I enjoy theory and analysis of mathematically tractable systems, particularly when they can be relevant for neuromorphic computation."
Dr Amy McGovern leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), and is based at the University of Oklahoma. We spoke about her research, setting up the Institute, and some of the exciting projects and collaborations on the horizon. In terms of the Institute, we got funded to be one of the inaugural Institutes in September 2020 and our focus is on creating trustworthy AI with a focus on weather applications, climate applications and coastal oceanography. However, we are aiming for a broad set of applications so we named ourselves AI2ES to reflect environmental science (ES) generally. We're developing AI hand-in-hand with meteorologists, oceanographers, climate scientists, and risk communication specialists who are social scientists.
The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Anna Lenhart about Congress and the tech lobby. What should you know about antitrust regulation nationally and internationally? How does the tech sector drive policy? Anna Lenhart is a researcher in technology policy and democracy at the University of Maryland's iSchool Ethics & Values in Design Lab.
The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report states that warming of the climate system is unequivocal and notes that each of the last three decades has been successively warmer at the Earth's surface than any preceding decade since 1850. The report's projections of future global temperature change range from 1.1 to 4°C, although temperature increases of more than 6°C cannot be ruled out. This wide range of values reflects our limitations in producing accurate projections of the future climate change that would result from different potential pathways of greenhouse gas (GHG) emissions. The sources of uncertainty that prevent us from obtaining better precision are diverse. One of them relates to the computer models used to project future climate change.