Machine learning and AI seminars: a list of recent and forthcoming events

AIHub

Here you can find a list of the AI-related seminars scheduled to take place between 15 October and 30 November 2020. We've also listed recent past seminars that are available for you to watch. All events detailed here are free and open for anyone to attend virtually.

Doing more with less: deep learning for physics at the Large Hadron Collider
Speaker: Maurizio Pierini (CERN)
Organised by: University of Oxford
To receive the Zoom room link, send an empty email to: request.zoom.ox.ml.and.physics


AI Machine Learning Breakthrough Is a Twist on Brain Replay

#artificialintelligence

Recently, researchers affiliated with the Baylor College of Medicine, the University of Cambridge, the University of Massachusetts Amherst, and Rice University created a new way of adapting a neuroscience concept called "brain replay" to artificial neural networks to enable continual learning. From a neuroscience perspective, brain replay is analogous to a streaming service that activates repeat showings from its vast archive of stored pre-recorded content. The brain can replay memories, whether asleep or awake, by reactivating the neural activity patterns that represent prior experiences. This memory replay starts in the hippocampus and continues in the cortex. The research trio of Hava Siegelmann, Andreas Tolias, and Gido van de Ven published a study in Nature Communications on August 13, 2020, showing state-of-the-art performance from neural networks by deploying a new twist on mimicking brain replay.
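
In machine learning terms, replay counters catastrophic forgetting by interleaving examples of earlier tasks with new training data. Below is a minimal sketch of that general idea using a raw buffer of stored examples; note that the study itself uses a more sophisticated brain-inspired generative replay, and all data, model choices, and values here are synthetic placeholders.

```python
# Minimal sketch of replay-based continual learning (illustrative only;
# the paper's method generates replayed patterns inside the network
# rather than storing raw examples as done here).
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Synthetic binary task: two Gaussian blobs around +/- center."""
    X = np.vstack([rng.normal(center, 0.5, (100, 2)),
                   rng.normal(-center, 0.5, (100, 2))])
    y = np.array([1] * 100 + [0] * 100)
    return X, y

def sgd_step(w, X, y, lr=0.1):
    """One logistic-regression gradient step."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * X.T @ (p - y) / len(y)

X1, y1 = make_task(np.array([2.0, 0.0]))    # task 1
X2, y2 = make_task(np.array([0.0, 2.0]))    # task 2

w = np.zeros(2)
for _ in range(200):                        # learn task 1 first
    w = sgd_step(w, X1, y1)

for _ in range(200):                        # learn task 2 *with replay*
    idx = rng.choice(len(X1), 32)           # replayed task-1 minibatch
    X = np.vstack([X2, X1[idx]])            # interleave old and new data
    y = np.concatenate([y2, y1[idx]])
    w = sgd_step(w, X, y)

for X, y, name in [(X1, y1, "task 1"), (X2, y2, "task 2")]:
    acc = (((X @ w) > 0) == y).mean()       # without replay, task 1 is forgotten
    print(f"accuracy on {name}: {acc:.2f}")
```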


New AI Paradigm May Reduce a Heavy Carbon Footprint

#artificialintelligence

Machine learning, a branch of artificial intelligence (AI), can have a considerable carbon footprint. Deep learning is inherently costly, as it requires massive computational and energy resources. Now researchers in the U.K. have discovered how to create an energy-efficient artificial neural network without sacrificing accuracy, publishing the findings in Nature Communications on August 26, 2020. The biological brain is the inspiration for neuromorphic computing: an interdisciplinary approach that draws upon neuroscience, physics, artificial intelligence, computer science, and electrical engineering to create artificial neural systems that mimic biological functions and systems. The human brain is a complex system of roughly 86 billion neurons and hundreds of trillions of synapses.
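
Part of why neuromorphic systems save energy is that computation is event-driven: a unit only "fires" when its input crosses a threshold, so most units are silent most of the time. As a generic textbook illustration of that kind of unit (not the method from the paper discussed above), here is a leaky integrate-and-fire neuron; all constants are arbitrary.

```python
# Generic leaky integrate-and-fire neuron (textbook illustration only;
# not the architecture from the Nature Communications paper above).
import numpy as np

dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # ms, ms, arbitrary units
rng = np.random.default_rng(0)

v, spikes = 0.0, []
current = rng.uniform(0.0, 0.12, size=200)   # random input current per ms
for t, i_in in enumerate(current):
    v += dt * (-v / tau + i_in)              # leaky integration of input
    if v >= v_thresh:                        # spike only at threshold:
        spikes.append(t)                     # computation is sparse and
        v = v_reset                          # event-driven, hence low energy
print("spike times (ms):", spikes)
```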


Artificial Intelligence in the Creative Industries: A Review

arXiv.org Artificial Intelligence

This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs) and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups related to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post-production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the 'creator', remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human centric -- where it is designed to augment, rather than replace, human creativity.


Unsupervised learning for vascular heterogeneity assessment of glioblastoma based on magnetic resonance imaging: The Hemodynamic Tissue Signature

arXiv.org Artificial Intelligence

This thesis focuses on the research and development of the Hemodynamic Tissue Signature (HTS) method: an unsupervised machine learning approach to describe the vascular heterogeneity of glioblastomas by means of perfusion MRI analysis. The HTS builds on the concept of habitats. A habitat is defined as a sub-region of the lesion with a particular MRI profile describing a specific physiological behavior. The HTS method delineates four habitats within the glioblastoma: the High Angiogenic Tumor (HAT) habitat, as the most perfused region of the enhancing tumor; the Low Angiogenic Tumor (LAT) habitat, as the region of the enhancing tumor with a lower angiogenic profile; the potentially Infiltrated Peripheral Edema (IPE) habitat, as the non-enhancing region adjacent to the tumor with elevated perfusion indices; and the Vasogenic Peripheral Edema (VPE) habitat, as the remaining edema of the lesion with the lowest perfusion profile. The results of this thesis have been published in ten scientific contributions, including top-ranked journals and conferences in the areas of Medical Informatics, Statistics and Probability, Radiology & Nuclear Medicine, Machine Learning and Data Mining, and Biomedical Engineering. An industrial patent registered in Spain (ES201431289A), Europe (EP3190542A1) and the USA (US20170287133A1) was also issued, summarizing the efforts of the thesis to generate tangible assets besides the academic revenue obtained from research publications. Finally, the methods, technologies and original ideas conceived in this thesis led to the foundation of ONCOANALYTICS CDX, a company built around the business model of companion diagnostics for pharmaceutical compounds, conceived as a vehicle to facilitate the industrialization of the ONCOhabitats technology.
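
Conceptually, the habitat delineation amounts to unsupervised clustering of per-voxel perfusion features, with the resulting clusters ordered by their perfusion level. The toy sketch below uses plain k-means on synthetic two-feature data purely to make that idea concrete; it is not the thesis' actual HTS pipeline, which is considerably more elaborate (e.g., it first separates enhancing tumor from edema).

```python
# Toy illustration of habitat delineation: cluster per-voxel perfusion
# features into four groups and rank them by perfusion. Features and
# data are synthetic placeholders, not real MRI measurements.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-voxel perfusion features, e.g. (rCBV, rCBF), one row per voxel.
voxels = np.vstack([rng.normal(m, 0.3, (50, 2))
                    for m in ([3.0, 3.0], [2.0, 2.0], [1.2, 1.2], [0.3, 0.3])])

def kmeans(X, k=4, iters=50):
    """Plain k-means with random data-point initialization."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(voxels)
order = np.argsort(-centers.sum(axis=1))   # highest perfusion first
names = ["HAT", "LAT", "IPE", "VPE"]       # simplistic perfusion-rank naming
for rank, j in enumerate(order):
    print(names[rank], "cluster mean:", centers[j].round(2))
```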


Point at the Triple: Generation of Text Summaries from Knowledge Base Triples

Journal of Artificial Intelligence Research

We investigate the problem of generating natural language summaries from knowledge base triples. Our approach is based on a pointer-generator network, which, in addition to generating regular words from a fixed target vocabulary, is able to verbalise triples in several ways. We undertake automatic and human evaluations on single-domain and open-domain summary generation tasks. Both show that our approach significantly outperforms other data-driven baselines.
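
The pointer-generator idea can be stated compactly: at each decoding step the model mixes a softmax over the fixed vocabulary with attention-weighted copying from the input, so out-of-vocabulary entities from the triples can still be emitted. Below is a toy numpy illustration of that final mixture distribution; all probabilities and tokens are hand-set placeholders, not outputs of the authors' trained model.

```python
# Illustrative pointer-generator mixture (toy numbers; a real model
# computes these quantities with learned encoder/decoder networks).
import numpy as np

vocab = ["<unk>", "is", "a", "city", "in"]       # fixed target vocabulary
source = ["Berlin", "country", "Germany"]        # tokens from the input triple
ext_vocab = vocab + source                       # extended vocabulary

p_vocab = np.array([0.05, 0.4, 0.2, 0.25, 0.1])  # decoder softmax (toy values)
attention = np.array([0.7, 0.1, 0.2])            # attention over source tokens
p_gen = 0.6                                      # generate-vs-copy gate in [0, 1]

# Final distribution: generate from the vocabulary with prob p_gen,
# otherwise copy a source token proportionally to its attention weight.
p_final = np.zeros(len(ext_vocab))
p_final[:len(vocab)] = p_gen * p_vocab
for i, tok in enumerate(source):
    p_final[ext_vocab.index(tok)] += (1 - p_gen) * attention[i]

print(dict(zip(ext_vocab, p_final.round(3))))
# "Berlin" can be produced even though it is out of vocabulary.
```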


Neural Machine Translation: A Review

Journal of Artificial Intelligence Research

The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we will trace back the origins of modern NMT architectures to word and sentence embeddings and earlier examples of the encoder-decoder network family. We will conclude with a short survey of more recent trends in the field.
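
The encoder-decoder family the review traces back to works, schematically, by compressing the source sentence into vector representations and then generating target words one at a time conditioned on them. The sketch below shows the shape of that computation with untrained random weights; real NMT systems learn all of these parameters end to end and use far richer architectures (attention, transformers) than this mean-pooled encoder.

```python
# Schematic encoder-decoder, the family modern NMT grew out of
# (untrained random weights, so the output is gibberish until trained).
import numpy as np

rng = np.random.default_rng(0)
src_vocab = {"das": 0, "haus": 1}                # toy source vocabulary
tgt_vocab = ["<bos>", "<eos>", "the", "house"]   # toy target vocabulary

d = 8                                            # embedding / hidden size
E_src = rng.normal(size=(len(src_vocab), d))     # source embeddings
E_tgt = rng.normal(size=(len(tgt_vocab), d))     # target embeddings
W_h = rng.normal(size=(d, d))                    # recurrent weights
W_out = rng.normal(size=(d, len(tgt_vocab)))     # output projection

def encode(tokens):
    """Encoder: compress the source sentence into a fixed vector."""
    return np.tanh(E_src[[src_vocab[t] for t in tokens]].mean(axis=0))

def decode(h, max_len=5):
    """Greedy decoder: emit target words one at a time until <eos>."""
    out, prev = [], 0                            # start from <bos>
    for _ in range(max_len):
        h = np.tanh(h @ W_h + E_tgt[prev])       # simple recurrent update
        prev = int((h @ W_out).argmax())         # pick the most likely word
        if tgt_vocab[prev] == "<eos>":
            break
        out.append(tgt_vocab[prev])
    return out

print(decode(encode(["das", "haus"])))
```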


Regulation of Artificial Intelligence in Drug Discovery and Health Care

#artificialintelligence

It is going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool. Artificial intelligence (AI) can be defined as the use of intelligent machines to replicate and augment the intelligence of human beings. The Turing test was proposed to determine whether a machine exhibits intelligent behaviour. AI applications are being used in various fields such as telecommunications, banking, agriculture, manufacturing, health care, and transportation. The implementation of AI in health care aims to enhance patients' lives and to enable physicians, doctors, hospitals, and administrators to improve health care delivery in a cost-effective and time-efficient manner. The traditional drug industry is also experiencing a wave of change due to the implementation of AI-based processes in drug discovery and development. Substituting AI-based solutions for traditional drug discovery methods is expected to reduce the time for drug development; using AI in clinical trials has reduced the time required for drug trials from 4–6 months to three months. By analysing genomic data from different patients, AI helps select only those patients whose genetic profiles suggest they are suitable candidates for the clinical trial. Machine learning technologies, deep learning algorithms, various neural networks (such as artificial neural networks or computational neural networks), and content screening are a few examples of AI that have brought radical changes to the process of drug discovery and development.


DeepMind and Oxford University researchers on how to 'decolonize' AI

Engadget

Sometimes it's tempting to think of every technological advancement as the brave first step on new shores, a fresh chance to shape the future rationally. In reality, every new tool enters the same old world with its same unresolved issues. In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind -- the AI lab and sister company to Google -- and the University of Oxford presents a vision to "decolonize" artificial intelligence. The aim is to keep society's ugly prejudices from being reproduced and amplified by today's powerful machine learning systems. The paper, published this month in the journal Philosophy & Technology, has at its heart the idea that you have to understand historical context to understand why technology can be biased.


COVI White Paper

arXiv.org Artificial Intelligence

The SARS-CoV-2 (Covid-19) pandemic has caused significant strain on public health institutions around the world. Contact tracing is an essential tool to change the course of the Covid-19 pandemic. Manual contact tracing of Covid-19 cases has significant challenges that limit the ability of public health authorities to minimize community infections. Personalized peer-to-peer contact tracing through the use of mobile apps has the potential to shift the paradigm. Some countries have deployed centralized tracking systems, but more privacy-protecting decentralized systems offer much of the same benefit without concentrating data in the hands of a state authority or for-profit corporations. Machine learning methods can circumvent some of the limitations of standard digital tracing by incorporating many clues and their uncertainty into a more graded and precise estimation of infection risk. The estimated risk can provide early risk awareness, personalized recommendations and relevant information to the user. Finally, non-identifying risk data can inform epidemiological models trained jointly with the machine learning predictor. These models can provide statistical evidence for the importance of factors involved in disease transmission. They can also be used to monitor, evaluate and optimize health policy and (de)confinement scenarios according to medical and economic productivity indicators. However, such a strategy based on mobile apps and machine learning should proactively mitigate potential ethical and privacy risks, which could have substantial impacts on society (not only on health, but also, for example, stigmatization and abuse of personal data). Here, we present an overview of the rationale, design, ethical considerations and privacy strategy of 'COVI', a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
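
To illustrate what "incorporating many clues and their uncertainty into a graded estimation of infection risk" can look like in the simplest case, here is a hand-set naive-Bayes-style combination of clues into a posterior probability. COVI's actual predictor is a learned model operating on far richer inputs; every clue, weight, and prior below is a hypothetical placeholder.

```python
# Toy graded risk estimator (illustrative only; not COVI's predictor).
import numpy as np

# Log-likelihood ratios per clue: how much more likely the clue is if the
# user is infected vs. not (hand-picked placeholder values, not real data).
CLUE_LLR = {
    "fever": np.log(4.0),
    "cough": np.log(2.5),
    "risky_contact": np.log(3.0),
    "negative_test": np.log(0.1),
}

def infection_risk(clues, prior=0.01):
    """Combine observed clues into a graded posterior probability."""
    logit = np.log(prior / (1 - prior)) + sum(CLUE_LLR[c] for c in clues)
    return 1.0 / (1.0 + np.exp(-logit))

print(f"{infection_risk(['cough']):.3f}")                   # mild evidence
print(f"{infection_risk(['fever', 'risky_contact']):.3f}")  # stronger evidence
```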