We are excited to announce that results from our study in collaboration with Eli Lilly and Company were published today online in the leading journal in the field, #Gastroenterology. The study is the first to demonstrate that a deep learning #AI can be trained for automated disease severity scoring in patients with ulcerative colitis. This represents an opportunity to introduce machine reading of endoscopic videos into #IBD/ulcerative colitis clinical trials. Thank you to all our study authors!
Neo4j, the leader in graph technology, announced the latest version of Neo4j for Graph Data Science, a breakthrough that democratizes advanced graph-based machine learning (ML) techniques by leveraging deep learning and graph convolutional neural networks. Until now, few companies outside of Google and Facebook have had the AI foresight and resources to leverage graph embeddings. This powerful technique calculates the shape of the surrounding network for each piece of data inside a graph, enabling far better machine learning predictions. Neo4j for Graph Data Science version 1.4 democratizes these innovations to upend the way enterprises make predictions in diverse scenarios, from fraud detection to tracking customer or patient journeys, to drug discovery and knowledge graph completion. Neo4j for Graph Data Science version 1.4 is the first and only graph-native machine learning functionality commercially available for enterprises.
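To make the idea concrete: a graph embedding summarizes the shape of the network around each node as a fixed-length vector that a downstream ML model can consume. The sketch below is a toy, hand-rolled structural featurization in pure Python, not Neo4j's actual implementation (which uses algorithms such as FastRP and GraphSAGE); it only illustrates the concept of encoding a node by its neighborhood.

```python
# Toy illustration of the idea behind graph embeddings: describe each node
# by features of its surrounding network. This is a hypothetical sketch,
# not the algorithm Neo4j Graph Data Science ships.

def node_features(adjacency):
    """Return {node: (degree, mean neighbor degree)} for a graph given as
    an adjacency dict {node: set of neighbors}."""
    degree = {n: len(nbrs) for n, nbrs in adjacency.items()}
    features = {}
    for n, nbrs in adjacency.items():
        mean_nbr_deg = sum(degree[m] for m in nbrs) / len(nbrs) if nbrs else 0.0
        features[n] = (degree[n], mean_nbr_deg)
    return features

graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
feats = node_features(graph)
```

Even these two crude numbers distinguish a hub from a leaf; real embedding algorithms capture far richer neighborhood structure in the same spirit.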
With the advent of new deep learning approaches based on the transformer architecture, natural language processing (NLP) techniques have undergone a revolution in performance and capabilities. Cutting-edge NLP models are becoming the core of modern search engines, voice assistants, chatbots, and more. Modern NLP models can synthesize human-like text and answer questions posed in natural language. As DeepMind research scientist Sebastian Ruder says, NLP's ImageNet moment has arrived. While NLP has spread into mainstream use cases, it is still not widely adopted in healthcare, clinical applications, and scientific research.
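The core operation behind the transformer architecture mentioned above is scaled dot-product attention, in which each query mixes the value vectors according to how strongly it matches each key. The sketch below is a minimal, dependency-free illustration of that computation, not a production implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors:
    each output is a weighted average of the values, weighted by
    softmax(query . key / sqrt(d))."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# A query aligned with the first key attends almost entirely to the first value.
out = attention(queries=[[10.0, 0.0]],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
```

Stacking many such attention layers (with learned projections) is what gives modern NLP models their capabilities.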
Imagine that before you could make dinner, you first had to rebuild the kitchen, specifically designed for each recipe. You'd spend far more time on preparation than actually cooking. For computational biologists, analyzing genomics data has been a similarly time-consuming process. Before they can even begin their analysis, they spend a lot of valuable time formatting and preparing huge data sets to feed into deep learning models. To streamline this process, researchers from the Max Delbrueck Center for Molecular Medicine in the Helmholtz Association (MDC) developed a universal programming tool that converts a wide variety of genomics data into the required format for analysis by deep learning models.
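One of the most common conversions of this kind is one-hot encoding, which turns a DNA sequence into the numeric matrix most deep learning models expect as input. The snippet below is a minimal illustration of that single step, not the MDC tool itself:

```python
def one_hot(seq, alphabet="ACGT"):
    """One-hot encode a DNA sequence into a list of 4-element vectors --
    a standard input format for genomics deep learning models."""
    index = {base: i for i, base in enumerate(alphabet)}
    encoded = []
    for base in seq.upper():
        vec = [0] * len(alphabet)
        if base in index:          # ambiguous bases (e.g. 'N') stay all-zero
            vec[index[base]] = 1
        encoded.append(vec)
    return encoded
```

For example, `one_hot("ACGT")` yields the 4x4 identity pattern, and an `N` becomes an all-zero row; a universal converter must handle many such encodings, file formats, and edge cases automatically.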
Deep learning bears promise for drug discovery, including advanced image analysis, prediction of molecular structure and function, and automated generation of innovative chemical entities with bespoke properties. Despite the growing number of successful prospective applications, the underlying mathematical models often remain elusive to interpretation by the human mind. There is a demand for 'explainable' deep learning methods to address the need for a new narrative of the machine language of the molecular sciences. This Review summarizes the most prominent algorithmic concepts of explainable artificial intelligence and forecasts future opportunities, potential applications, and several remaining challenges. We also hope it encourages additional efforts towards the development and acceptance of explainable artificial intelligence techniques. Drug discovery has recently profited greatly from the use of deep learning models. However, these models can be notoriously hard to interpret. In this Review, Jiménez-Luna and colleagues summarize recent approaches to use explainable artificial intelligence techniques in drug discovery.
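A simple, widely used explainability idea of the kind such reviews survey is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. The sketch below applies it to a toy model (this is a generic illustration of the concept, not a method from the Review itself):

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Toy feature attribution: shuffle one feature column at a time and
    record the mean drop in accuracy. A larger drop means the model relies
    more heavily on that feature."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(row) == label for row, label in zip(data, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - accuracy(Xp))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model": predicts 1 when feature 0 is positive; feature 1 is ignored,
# so its importance should come out as exactly zero.
model = lambda row: int(row[0] > 0)
X = [[1, 5], [-1, 5], [2, -3], [-2, -3], [3, 1], [-3, 1]]
y = [1, 0, 1, 0, 1, 0]
imps = permutation_importance(model, X, y)
```

Model-agnostic attributions like this are one family of explainable-AI techniques; gradient-based and attention-based attributions for deep networks follow the same goal of linking predictions back to inputs.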
A genome is a genetic blueprint that determines an organism's characteristics. Deoxyribonucleic acid (DNA) and, in the case of many viruses, ribonucleic acid (RNA) are the building blocks of genomic sequences. Manipulating these nucleic acids directly can lead to tangible changes in the organism. As such, developments in genetic engineering focus on our ability to manipulate genomic sequences. But this is a daunting task.
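Working with genomic sequences computationally starts from a few basic string operations on DNA and RNA, such as complementing a strand or transcribing DNA into RNA. A minimal sketch of those two building blocks:

```python
def reverse_complement(dna):
    """Return the reverse complement of a DNA strand
    (A pairs with T, C pairs with G; read the opposite strand 5'->3')."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(dna.upper()))

def transcribe(dna):
    """Transcribe a DNA coding sequence into RNA (thymine T -> uracil U)."""
    return dna.upper().replace("T", "U")
```

For example, `reverse_complement("ATGC")` is `"GCAT"` and `transcribe("ATGC")` is `"AUGC"`. Real genetic engineering is of course vastly harder than these symbol manipulations, which is exactly the daunting gap the excerpt describes.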
The State of AI Report 2020 is a comprehensive report on all things AI. Picking up from where we left off in summarizing key findings, we continue the conversation with authors Nathan Benaich and Ian Hogarth. Benaich is the founder of Air Street Capital and RAAIS, and Hogarth is an AI angel investor and a UCL IIPP visiting professor. Key themes we covered so far were AI democratization, industrialization, and the way to artificial general intelligence. We continue with healthcare and biology's AI moment, research and application breakthroughs, AI ethics, and predictions.
DNA and RNA have been compared to "instruction manuals" containing the information needed for living "machines" to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables, a task that is daunting to even the most intrepid human brains. Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based "toehold" sequences and predict which ones will be most effective at sensing and responding to a desired target sequence. As reported in two papers published concurrently today in Nature Communications, the algorithms could be generalizable to other problems in synthetic biology as well, and could accelerate the development of biotechnology tools to improve science and medicine and help save lives.
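Models that score sequences like these toeholds need each variable-length RNA string turned into a fixed-length numeric vector first. A common, simple featurization is k-mer frequency counting, sketched below (this is a generic illustration, not the featurization the Wyss/MIT papers used):

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3, alphabet="ACGU"):
    """Count k-mer frequencies in an RNA sequence -- a simple fixed-length
    featurization (here 4^k = 64 dimensions for k=3) that a machine learning
    model can use to score candidate sequences."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    total = max(sum(counts.values()), 1)   # avoid division by zero
    return [counts[km] / total for km in kmers]

feats = kmer_features("AAAA", k=3)   # only the k-mer "AAA" appears
```

Each candidate toehold gets the same 64-dimensional vector regardless of its length, which is what lets one model compare reams of different sequences.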
Work by Wyss Core Faculty member Peng Yin in collaboration with Collins and others has demonstrated that different toehold switches can be combined to compute the presence of multiple "triggers," similar to a computer's logic board.
You are free to share this article under the Attribution 4.0 International license. A new deep learning-based tool called Metabolic Translator may soon give researchers a better handle on how drugs in development will perform in the human body. When you take a medication, you want to know precisely what it does. Pharmaceutical companies go through extensive testing to ensure that you do. Metabolic Translator, a computational tool that predicts metabolites (the products of interactions between small molecules like drugs and enzymes), could help improve the process. The new tool takes advantage of deep-learning methods and the availability of massive reaction datasets to give developers a broad picture of what a drug will do.
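Before deep learning, metabolite prediction was often framed as applying known transformation rules to a molecule and enumerating the products; learned models like Metabolic Translator generalize beyond hand-written rules. The toy sketch below shows only the rule-enumeration framing, with made-up string-substitution "rules" standing in for real chemistry:

```python
# Toy sketch of rule-based metabolite enumeration. The rule patterns below
# are hypothetical string stand-ins, NOT real SMARTS/SMILES chemistry, and
# this is not how Metabolic Translator works -- it learns transformations
# from large reaction datasets with deep learning instead.

RULES = {
    "O-demethylation": ("OC", "O"),      # crude substring stand-in
    "hydroxylation":   ("cc", "c(O)c"),  # crude substring stand-in
}

def candidate_metabolites(molecule):
    """Apply each toy rule at every matching position in the molecule
    string, collecting (rule name, product) pairs."""
    products = set()
    for name, (pattern, replacement) in RULES.items():
        start = 0
        while (i := molecule.find(pattern, start)) != -1:
            products.add((name, molecule[:i] + replacement + molecule[i + len(pattern):]))
            start = i + 1
    return products

hits = candidate_metabolites("cOC")
```

The weakness of this framing, namely that it can only propose transformations someone has already written down, is what makes learning the rules from massive reaction datasets attractive.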