Results


Love in the time of algorithms: would you let your artificial intelligence choose your partner?

#artificialintelligence

It could be argued that artificial intelligence (AI) is already the indispensable tool of the 21st century. From helping doctors diagnose and treat patients to rapidly advancing new drug discoveries, it's our trusted partner in so many ways. Now it has found its way into the once exclusively human domain of love and relationships. With AI systems as matchmakers, it may become common in the coming decades to date a personalised avatar. This was explored in the 2013 movie "Her", in which a writer living in near-future Los Angeles develops affection for an AI system. The sci-fi film won an Academy Award for depicting what seemed like a highly unconventional love story.


AlphaFold2 @ CASP14: "It feels like one's child has left home."

#artificialintelligence

The past week was a momentous occasion for protein structure prediction, structural biology at large, and, in due time, may prove to be so for the whole of the life sciences. CASP14, the conference for the biennial competition for the prediction of protein structure from sequence, took place virtually over multiple remote working platforms. DeepMind, Google's premier AI research group, entered the competition as they did the previous time, when they upended expectations of what an industrial research lab can do. The outcome this time, however, was very, very different. At CASP13 DeepMind made an impressive showing with AlphaFold but was ultimately within the bounds of the usual expectations of academic progress, albeit at an accelerated rate. At CASP14 DeepMind produced an advance so thorough that it compelled the CASP organizers to declare the protein structure prediction problem for single protein chains solved. From my read of most CASP14 attendees (virtual as the meeting was), I sense that this was ...


DeepMind's AlphaFold 2 Explained! AI Breakthrough in Protein Folding! What we know (& what we don't)

#artificialintelligence

Proteins are large, complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the "protein folding problem", and it has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). This breakthrough demonstrates the impact AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields that explain and shape our world.


Distributed Bandits: Probabilistic Communication on $d$-regular Graphs

arXiv.org Machine Learning

We study the decentralized multi-agent multi-armed bandit problem for agents that communicate with probability over a network defined by a $d$-regular graph. Every edge in the graph has probabilistic weight $p$ to account for the ($1\!-\!p$) probability of a communication link failure. At each time step, each agent chooses an arm and receives a numerical reward associated with the chosen arm. After each choice, each agent observes the last obtained reward of each of its neighbors with probability $p$. We propose a new Upper Confidence Bound (UCB) based algorithm and analyze how agent-based strategies contribute to minimizing group regret in this probabilistic communication setting. We provide theoretical guarantees that our algorithm outperforms state-of-the-art algorithms. We illustrate our results and validate the theoretical claims using numerical simulations.
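
A minimal simulation sketch of the setting described above, assuming standard UCB indices on a random d-regular graph; the confidence bound, parameter values, and regret bookkeeping are illustrative choices rather than the paper's algorithm:

```python
# Decentralized UCB agents on a d-regular graph with unreliable links (sketch).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
K, n_agents, d, p, horizon = 5, 10, 3, 0.7, 2000
arm_means = rng.uniform(0.0, 1.0, K)
G = nx.random_regular_graph(d, n_agents, seed=0)

counts = np.ones((n_agents, K))                      # assume one initial pull per arm
sums = rng.normal(arm_means, 1.0, (n_agents, K))     # rewards from those initial pulls
regret = 0.0

for t in range(K, horizon):
    last = np.zeros((n_agents, 2))                   # (arm, reward) pulled this round
    for a in range(n_agents):
        ucb = sums[a] / counts[a] + np.sqrt(2.0 * np.log(t) / counts[a])
        arm = int(np.argmax(ucb))
        reward = rng.normal(arm_means[arm], 1.0)
        counts[a, arm] += 1
        sums[a, arm] += reward
        last[a] = (arm, reward)
        regret += arm_means.max() - arm_means[arm]
    # each edge delivers the neighbour's last observation with probability p
    for a in range(n_agents):
        for nb in G.neighbors(a):
            if rng.random() < p:
                arm, reward = int(last[nb, 0]), last[nb, 1]
                counts[a, arm] += 1
                sums[a, arm] += reward

print(f"empirical group pseudo-regret after {horizon} rounds: {regret:.1f}")
```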


A Large-Scale Database for Graph Representation Learning

arXiv.org Artificial Intelligence

With the rapid emergence of graph representation learning, the construction of new large-scale datasets is necessary to distinguish model capabilities and accurately assess the strengths and weaknesses of each technique. By carefully analyzing existing graph databases, we identify three critical components for advancing the field of graph representation learning: (1) large graphs, (2) many graphs, and (3) class diversity. To date, no single graph database offers all of these desired properties. We introduce MalNet, the largest public graph database ever constructed, representing a large-scale ontology of software function call graphs. MalNet contains over 1.2 million graphs, averaging over 17k nodes and 39k edges per graph, across a hierarchy of 47 types and 696 families. Compared to the popular REDDIT-12K database, MalNet offers 105x more graphs, 44x larger graphs on average, and 63x the classes. We provide a detailed analysis of MalNet, discussing its properties and provenance. The unprecedented scale and diversity of MalNet offers exciting opportunities to advance the frontiers of graph representation learning---enabling new discoveries and research into imbalanced classification, explainability and the impact of class hardness. The database is publicly available at www.mal-net.org.
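
As a rough illustration of working with a call-graph corpus at this scale, the snippet below summarises a directory of edge-list files with networkx; the directory layout and file extension are assumptions about how a MalNet-style download might be organised, not a documented loader API:

```python
# Summarise per-graph size statistics for a folder of call-graph edge lists (sketch).
import glob
import networkx as nx

sizes = []
for path in glob.glob("malnet-graphs/**/*.edgelist", recursive=True):  # assumed layout
    G = nx.read_edgelist(path, create_using=nx.DiGraph)
    sizes.append((path, G.number_of_nodes(), G.number_of_edges()))

# print the ten largest graphs by node count
for path, n, m in sorted(sizes, key=lambda s: -s[1])[:10]:
    print(f"{n:>8} nodes  {m:>9} edges  {path}")
```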


AI at skin rescue

#artificialintelligence

Even better than a visit to the dermatologist, PROVEN's three-minute Skin Genome Quiz (check it out!) asks specific questions about your genetic background, your sleep schedule, how much water you drink, and more. Then, taking it a step further, it considers the humidity levels, air quality, and water hardness of your zip code, among other relevant details. All of this insight is then funneled into an algorithm that determines the most effective ingredients (and the precise concentrations necessary) for your best face forward. Where does it source all this information? The underlying database, complete with over 4,000 scientific publications, 20,000 ingredients, 100,000 individual skincare products, and 8 million customer reviews, was developed by PROVEN co-founder Dr. Yuan herself, and it won MIT's Artificial Intelligence Award in 2018.


Rediscovering alignment relations with Graph Convolutional Networks

arXiv.org Artificial Intelligence

Knowledge graphs are concurrently published and edited in the Web of data. Hence they may overlap, which makes matching their content a key task. This task encompasses the identification, within and across knowledge graphs, of nodes that are equivalent, more specific, or weakly related. In this article, we propose to match nodes of a knowledge graph by (i) learning node embeddings with Graph Convolutional Networks such that similar nodes have low distances in the embedding space, and (ii) clustering nodes based on their embeddings. We experimented with this approach on a biomedical knowledge graph and particularly investigated the interplay between formal semantics and GCN models, with the following two main focuses. Firstly, we applied various inference rules associated with domain knowledge, independently or combined, before learning node embeddings, and we measured the improvements in matching results. Secondly, while our GCN model is agnostic to the exact alignment relations (e.g., equivalence, weak similarity), we observed that distances in the embedding space are coherent with the "strength" of these different relations (e.g., smaller distances for equivalences), somehow corresponding to their rediscovery by the model.
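
A hedged sketch of steps (i) and (ii) using PyTorch Geometric: a two-layer GCN encoder trained so that known-matched nodes end up close in the embedding space, followed by k-means clustering. The toy graph, contrastive loss, and hyperparameters are assumptions for illustration, not the authors' exact setup:

```python
# Embed KG nodes with a GCN, pull matched pairs together, then cluster (sketch).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from sklearn.cluster import KMeans

num_nodes, in_dim, hid, out_dim = 100, 16, 32, 8
x = torch.randn(num_nodes, in_dim)                  # toy node features
edge_index = torch.randint(0, num_nodes, (2, 400))  # toy KG edges
pos_pairs = torch.randint(0, num_nodes, (2, 50))    # nodes assumed "equivalent"

class GCNEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid)
        self.conv2 = GCNConv(hid, out_dim)
    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = GCNEncoder()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    z = model(x, edge_index)
    # pull embeddings of matched pairs together, push random pairs apart
    pos = (z[pos_pairs[0]] - z[pos_pairs[1]]).pow(2).sum(dim=1).mean()
    neg_idx = torch.randint(0, num_nodes, pos_pairs.shape)
    neg = F.relu(1.0 - (z[neg_idx[0]] - z[neg_idx[1]]).norm(dim=1)).mean()
    loss = pos + neg
    opt.zero_grad()
    loss.backward()
    opt.step()

clusters = KMeans(n_clusters=20, n_init=10).fit_predict(z.detach().numpy())
print(clusters[:10])
```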


Adaptive Learning of Rank-One Models for Efficient Pairwise Sequence Alignment

arXiv.org Machine Learning

Pairwise alignment of DNA sequencing data is a ubiquitous task in bioinformatics and typically represents a heavy computational burden. State-of-the-art approaches to speed up this task use hashing to identify short segments (k-mers) that are shared by pairs of reads, which can then be used to estimate alignment scores. However, when the number of reads is large, accurately estimating alignment scores for all pairs is still very costly. Moreover, in practice, one is only interested in identifying pairs of reads with large alignment scores. In this work, we propose a new approach to pairwise alignment estimation based on two key new ingredients. The first ingredient is to cast the problem of pairwise alignment estimation under a general framework of rank-one crowdsourcing models, where the workers' responses correspond to k-mer hash collisions. These models can be accurately solved via a spectral decomposition of the response matrix. The second ingredient is to utilise a multi-armed bandit algorithm to adaptively refine this spectral estimator only for read pairs that are likely to have large alignments. The resulting algorithm iteratively performs a spectral decomposition of the response matrix for adaptively chosen subsets of the read pairs.
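
The rank-one idea can be illustrated with a toy spectral estimator: build a reads-by-k-mer incidence matrix, take its leading singular vector, and score read pairs by the outer product of that vector. The data, k-mer length, and scoring below are illustrative assumptions and omit the bandit refinement step:

```python
# Rank-one spectral estimate of pairwise k-mer collision scores (toy sketch).
import numpy as np

rng = np.random.default_rng(1)
bases = np.array(list("ACGT"))
reads = ["".join(rng.choice(bases, 60)) for _ in range(30)]  # toy reads
k = 8

kmers = sorted({r[i:i + k] for r in reads for i in range(len(r) - k + 1)})
index = {km: j for j, km in enumerate(kmers)}
M = np.zeros((len(reads), len(kmers)))                       # read x k-mer incidence
for i, r in enumerate(reads):
    for j in range(len(r) - k + 1):
        M[i, index[r[j:j + k]]] = 1.0

# leading singular vector -> per-read score; pair score = outer product (rank one)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
u = np.abs(U[:, 0]) * np.sqrt(S[0])
pair_scores = np.outer(u, u)
i, j = np.unravel_index(np.argmax(np.triu(pair_scores, 1)), pair_scores.shape)
print(f"most promising pair under the rank-one estimate: reads {i} and {j}")
```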


Generating Knowledge Graphs by Employing Natural Language Processing and Machine Learning Techniques within the Scholarly Domain

arXiv.org Artificial Intelligence

The continuous growth of scientific literature brings innovations and, at the same time, raises new challenges. One of them is that analysing the literature has become difficult due to the high volume of published papers, which require manual effort for annotation and management. Novel technological infrastructures are needed to help researchers, research policy makers, and companies to time-efficiently browse, analyse, and forecast scientific research. Knowledge graphs, i.e., large networks of entities and relationships, have proved to be an effective solution in this space. Scientific knowledge graphs focus on the scholarly domain and typically contain metadata describing research publications, such as authors, venues, organizations, research topics, and citations. However, the current generation of knowledge graphs lacks an explicit representation of the knowledge presented in the research papers. As such, in this paper, we present a new architecture that takes advantage of Natural Language Processing and Machine Learning methods for extracting entities and relationships from research publications and integrating them in a large-scale knowledge graph. Within this research work, we i) tackle the challenge of knowledge extraction by employing several state-of-the-art Natural Language Processing and Text Mining tools, ii) describe an approach for integrating entities and relationships generated by these tools, iii) show the advantage of such a hybrid system over alternative approaches, and iv) as a chosen use case, generate a scientific knowledge graph including 109,105 triples, extracted from 26,827 abstracts of papers within the Semantic Web domain. As our approach is general and can be applied to any domain, we expect that it can facilitate the management, analysis, dissemination, and processing of scientific knowledge.
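
As a toy illustration of the extract-then-integrate flow, the snippet below uses simple noun-chunk co-occurrence in place of the state-of-the-art extraction tools the paper combines, and accumulates the resulting triples in a single graph; the example abstracts, relation name, and overall pipeline are placeholders, not the authors' system:

```python
# Turn abstracts into (subject, relation, object) triples and merge into one graph (sketch).
import itertools
import networkx as nx
import spacy                      # assumes the en_core_web_sm model is installed

nlp = spacy.load("en_core_web_sm")
abstracts = [
    "Knowledge graphs support question answering over scholarly data.",
    "Graph embeddings improve link prediction in knowledge graphs.",
]

kg = nx.MultiDiGraph()
for doc in nlp.pipe(abstracts):
    for sent in doc.sents:
        entities = [chunk.text.lower() for chunk in sent.noun_chunks]
        # co-occurrence within a sentence stands in for a real relation extractor
        for s, o in itertools.combinations(entities, 2):
            kg.add_edge(s, o, relation="co_occurs_with")

print(kg.number_of_nodes(), "entities,", kg.number_of_edges(), "triples")
```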


Welcome! You are invited to join a webinar: New Trends in Drug Discovery: Robotics and AI. After registering, you will receive a confirmation email about joining the webinar.

#artificialintelligence

The drug discovery ecosystem is changing rapidly. The rise of robotics and AI enables the emergence of a new model of data-driven drug discovery. Bringing together recent advances in life sciences automation and machine learning applications for drug discovery, new partnerships are evolving that allow for game-changing improvements in the drug discovery process. The webinar will provide an overview of large-scale data and metadata capture enabled by end-to-end automation, going beyond what is currently possible in traditional wet lab operations, and will present case studies showing the impact on biotech and pharma operations, providing actionable insights for biopharma leaders.

Disclaimer Regarding Audio/Video Recording:

a) By participating in this webinar, you will be participating in an event where photography, video and audio recording may occur.

b) By participating in this webinar, you consent to interview(s), photography, audio recording, video recording and its/their release, publication, exhibition, or reproduction to be used for news, webcasts, promotional purposes, telecasts, advertising, inclusion on websites, or for any other purpose(s) that Invitrocue, its vendors, partners, affiliates and/or representatives deem fit. You release Invitrocue, its employees, and each and all persons involved from any liability connected with the taking, recording, digitising, or publication of interviews, photographs, computer images, video and/or sound recordings.