Machine learning helps retrace evolution of classical music

AIHub

Researchers in EPFL's Digital and Cognitive Musicology Lab used an unsupervised machine learning model to "listen to" and categorize more than 13,000 pieces of Western classical music, revealing how modes – such as major and minor – have changed throughout history. Many people may not be able to define what a minor mode is in music, but most would almost certainly recognize a piece played in a minor key. That's because we intuitively differentiate the set of notes belonging to the minor scale – which tend to sound dark, tense, or sad – from those in the major scale, which more often connote happiness, strength, or lightness. But throughout history, there have been periods when multiple other modes were used in addition to major and minor – or when no clear separation between modes could be found at all. Understanding and visualizing these differences over time is what Digital and Cognitive Musicology Lab (DCML) researchers Daniel Harasim, Fabian Moss, Matthias Ramirez, and Martin Rohrmeier set out to do in a recent study, which has been published in the open-access journal Humanities and Social Sciences Communications.
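To make the idea concrete, here is a minimal sketch of one way unsupervised mode discovery could look, assuming each piece is reduced to a 12-dimensional pitch-class histogram and that a Gaussian mixture model stands in for the authors' actual method (both are illustrative assumptions, not details taken from the study):

```python
# Illustrative sketch only -- not the DCML authors' actual model.
# Assumes each piece is summarized by a 12-dimensional pitch-class
# histogram; a Gaussian mixture then groups pieces into candidate
# "modes" in a fully unsupervised way.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 pieces, each a normalized histogram
# over the 12 pitch classes (real data would come from scores).
pieces = rng.dirichlet(np.ones(12), size=200)

# Fit mixtures with 1..6 components and pick the best by BIC,
# mirroring the idea of letting the data decide how many modes exist.
best_model, best_bic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pieces)
    bic = gmm.bic(pieces)
    if bic < best_bic:
        best_model, best_bic = gmm, bic

labels = best_model.predict(pieces)
print("inferred number of modes:", best_model.n_components)
print("pieces per mode:", np.bincount(labels))
```

Plotting such cluster assignments against composition dates would then show how the number and prevalence of mode-like groups shifts across historical periods.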


Bulgarian government adopts a new strategy for the development of AI

AIHub

The Bulgarian government has adopted a "Concept for the Development of Artificial Intelligence", a strategy that runs to 2030. It is in line with European Commission documents that regard AI as one of the main drivers of digital transformation in Europe and a significant factor in ensuring the competitiveness of the European economy and a high quality of life. Specific aspects of the European vision of "reliable AI" are included, namely that technological progress must be accompanied by a legal and ethical framework that ensures the security and rights of citizens. The strategy also covers collecting accessible, high-quality data, disseminating information, and ensuring equal access to the benefits of AI technologies. The concept document gives an overview of the three main sectors involved in AI: sectors developing AI, sectors consuming AI, and sectors enabling the development and implementation of AI.


Interview with Eleni Vasilaki – talking bio-inspired machine learning

AIHub

Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques, and assisting in the design of brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence. I am interested in bio-inspired machine learning. I enjoy theory and analysis of mathematically tractable systems, particularly when they can be relevant for neuromorphic computation.


Interview with Amy McGovern – creating trustworthy AI for environmental science applications

AIHub

Dr Amy McGovern leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), and is based at the University of Oklahoma. We spoke about her research, setting up the Institute, and some of the exciting projects and collaborations on the horizon. In terms of the Institute, we were funded as one of the inaugural Institutes in September 2020, and our focus is on creating trustworthy AI for weather, climate, and coastal oceanography applications. However, we are aiming for a broad set of applications, so we named ourselves AI2ES to reflect environmental science (ES) more generally. We're developing AI hand-in-hand with meteorologists, oceanographers, climate scientists, and risk communication specialists who are social scientists.


How explainable artificial intelligence can help humans innovate

AIHub

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.


Radical AI podcast: featuring Anna Lenhart

AIHub

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Anna Lenhart about Congress and the tech lobby. What should you know about antitrust regulation nationally and internationally? How does the tech sector drive policy? Anna Lenhart is a researcher in technology policy and democracy at the University of Maryland's iSchool Ethics & Values in Design Lab.


Helping decision-makers manage resilience under different climate change scenarios: global vs local

AIHub

The Intergovernmental Panel on Climate Change (IPCC) fifth assessment report states that warming of the climate system is unequivocal, and notes that each of the last three decades has been successively warmer at the Earth's surface than any preceding decade since 1850. The report's projections of future global temperature change range from 1.1 to 4°C, but temperature increases of more than 6°C cannot be ruled out [1]. This wide range of values reflects our limited ability to make accurate projections of the future climate change produced by different potential pathways of greenhouse gas (GHG) emissions. The sources of uncertainty that prevent us from obtaining better precision are diverse. One of them relates to the computer models used to project future climate change.


Does GPT-2 know your phone number?

AIHub

It turns out that OpenAI's GPT-2 language model does know how to reach a certain Peter W-- (name redacted for privacy). When prompted with a short snippet of Internet text, the model accurately generates Peter's contact information, including his work address, email, phone, and fax. In our recent paper, we evaluate how large language models memorize and regurgitate such rare snippets of their training data. We focus on GPT-2 and find that at least 0.1% of its text generations (a very conservative estimate) contain long verbatim strings that are "copy-pasted" from a document in its training set. Such memorization would be an obvious issue for language models trained on private data, e.g., users' emails, as the model might inadvertently output a user's sensitive conversations. Regular readers of the BAIR blog may be familiar with the issue of data memorization in language models.
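As a rough illustration of what "long verbatim strings" means in practice (not the paper's actual evaluation procedure), one could measure the longest word n-gram that a model generation shares with the training text:

```python
# Illustrative sketch of the idea of verbatim memorization, not the
# paper's methodology. It reports the longest run of consecutive words
# that a generation shares verbatim with a training document.
def longest_shared_ngram(generation: str, training_text: str, max_n: int = 50) -> int:
    gen_words = generation.split()
    train_words = training_text.split()
    best = 0
    for n in range(1, max_n + 1):
        train_ngrams = {tuple(train_words[i:i + n])
                        for i in range(len(train_words) - n + 1)}
        found = any(tuple(gen_words[i:i + n]) in train_ngrams
                    for i in range(len(gen_words) - n + 1))
        if not found:
            break
        best = n
    return best

# Toy example with made-up strings.
corpus = "call Peter at the office on Monday morning for details"
sample = "you can call Peter at the office on Monday if needed"
print(longest_shared_ngram(sample, corpus))  # length of the longest copied word run
```

A generation whose longest shared run is unusually long relative to chance is a candidate for memorized training data.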


Physics-constrained deep learning of building thermal dynamics

AIHub

Energy-efficient buildings are one of the top priorities for sustainably addressing global energy demand and reducing CO2 emissions. Advanced control strategies for buildings have been identified as a potential solution, with a projected energy-saving potential of up to 28%. However, the main bottleneck of model-free methods such as reinforcement learning (RL) is their sampling inefficiency, and hence their requirement for large datasets, which are costly to obtain or often simply not available in engineering practice. On the other hand, model-based methods such as model predictive control (MPC) suffer from the large cost associated with developing a physics-based model of the building's thermal dynamics. We address the challenge of developing cost- and data-efficient predictive models of a building's thermal dynamics via physics-constrained deep learning.
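The following is a minimal sketch of the general idea of physics-constrained learning for thermal dynamics, assuming a simple neural state-space model and a soft penalty on physically implausible temperatures; the architecture, variables, and bounds are illustrative assumptions, not the paper's actual model:

```python
# Minimal sketch of physics-constrained learning for building thermal
# dynamics (illustrative only). A neural state-space model predicts the
# next indoor temperatures, and a penalty term discourages physically
# implausible predictions outside an assumed plausible range.
import torch
import torch.nn as nn

class ThermalStateSpace(nn.Module):
    def __init__(self, n_state=4, n_input=2, hidden=32):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(n_state + n_input, hidden), nn.ReLU(),
            nn.Linear(hidden, n_state),
        )

    def forward(self, x, u):
        # residual update: next state = current state + learned increment
        return x + self.f(torch.cat([x, u], dim=-1))

model = ThermalStateSpace()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in data: (state, input, next-state) triples.
x = torch.randn(256, 4) * 2 + 21.0   # zone temperatures around 21 C (assumed)
u = torch.randn(256, 2)              # e.g., heating power and outdoor temperature (assumed)
x_next = x + 0.1 * torch.randn(256, 4)

T_MIN, T_MAX = 0.0, 45.0             # assumed physically plausible temperature bounds

for step in range(200):
    pred = model(x, u)
    mse = ((pred - x_next) ** 2).mean()
    # physics-inspired penalty: predictions outside the plausible range are penalized
    violation = torch.relu(T_MIN - pred) + torch.relu(pred - T_MAX)
    loss = mse + 10.0 * violation.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Constraining the learned model in this soft way is one route to keeping data requirements modest while still producing predictions that respect basic physics.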


Researchers use deep learning to identify gene regulation at single-cell level

AIHub

Scientists at the University of California, Irvine have developed a new deep-learning framework that predicts gene regulation at the single-cell level. In a study published recently in Science Advances, UCI researchers describe how their deep-learning technique can also be successfully used to observe gene regulation at the cellular level. Until now, that process had been limited to tissue-level analysis. According to co-author Xiaohui Xie, UCI professor of computer science, the framework enables the study of transcription factor binding at the cellular level, which was previously impossible due to the intrinsic noise and sparsity of single-cell data. A transcription factor (TF) is a protein that controls the transcription of genetic information from DNA to RNA; TFs regulate genes to ensure they're expressed in the proper sequence and at the right time in cells.
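As a generic illustration of sequence-based transcription factor binding prediction (not the UCI framework described in the article), a small 1-D convolutional network can scan one-hot encoded DNA and output a binding probability:

```python
# Generic illustration of TF binding prediction from DNA sequence,
# not the framework from the Science Advances study. Convolutional
# filters act as learnable motif detectors over one-hot encoded DNA.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    # (4, length) one-hot encoding of a DNA sequence
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).T.float()

class TFBindingCNN(nn.Module):
    def __init__(self, n_filters=16, motif_len=8):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=motif_len)
        self.head = nn.Linear(n_filters, 1)

    def forward(self, x):                      # x: (batch, 4, length)
        h = torch.relu(self.conv(x))           # (batch, filters, positions)
        h = h.max(dim=-1).values               # max-pool over sequence positions
        return torch.sigmoid(self.head(h))     # binding probability

model = TFBindingCNN()
seq = one_hot("ACGTAGCTAGGCTAGCTAACGT").unsqueeze(0)  # batch of one sequence
print(model(seq))  # untrained output, shape (1, 1)
```

Handling single-cell data would additionally require dealing with the noise and sparsity the article mentions, which is where the UCI framework's contribution lies.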