Posted on July 27th, 2021 by Dr. Francis Collins

Proteins are the workhorses of the cell. Mapping the precise shapes of the most important of these workhorses helps to unlock their life-supporting functions or, in the case of disease, their potential for dysfunction. While the amino acid sequence of a protein provides the basis for its 3D structure, deducing the atom-by-atom map from principles of quantum mechanics has been beyond the ability of computer programs, until now. In a recent study in the journal Science, researchers reported that they have developed artificial intelligence approaches for predicting the three-dimensional structure of proteins in record time, based solely on their one-dimensional amino acid sequences. This groundbreaking approach will not only aid researchers in the lab but also guide drug developers in coming up with safer and more effective ways to treat and prevent disease.
Machine learning is a highly iterative process: you don't know in advance which combination of model, features, and hyperparameters will work best, so you need to make slight tweaks and evaluate performance. Existing experiment trackers come with a high setup cost: to get one working, you usually have to spin up a database and run a web application. After trying multiple options, I thought that Jupyter notebooks could be an excellent choice for storing experiment results and retrieving them for comparison. This post explains how I use .ipynb files for that purpose.
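The iterate-tweak-evaluate loop above can be sketched in a few lines. This is a minimal, hypothetical example of the workflow, not the post's actual code: `run_experiment` is a toy stand-in for real training, and the field names are illustrative. Each run becomes one record that a notebook cell can keep, sort, and redisplay later.

```python
# Minimal sketch: track experiment runs as plain records inside a notebook cell.
# run_experiment is a toy stand-in for real model training and evaluation.
import json
from datetime import datetime, timezone

def run_experiment(params):
    """Stand-in for training + evaluation; returns a single metric."""
    # Toy scoring rule so the example is self-contained: best near lr=0.01,
    # with a small penalty for deeper models.
    return round(1.0 - abs(params["lr"] - 0.01) - 0.001 * params["depth"], 4)

results = []
for lr in (0.001, 0.01, 0.1):
    for depth in (3, 5):
        params = {"lr": lr, "depth": depth}
        results.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            **params,
            "accuracy": run_experiment(params),
        })

# Sort runs so the best configuration is obvious when the notebook is reopened.
results.sort(key=lambda r: r["accuracy"], reverse=True)
best = results[0]
print(json.dumps({k: best[k] for k in ("lr", "depth", "accuracy")}))
```

Because the results live in the notebook itself, re-running the cell or scrolling back through saved outputs is the whole "retrieval" mechanism; no database or web app is needed.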
Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don't look for on such scans: a patient's race. The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren't sure what cues the algorithms they created use to predict a person's race. Evidence that algorithms can read race from a person's medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms.
To develop and validate an automated morphometric analysis framework for the quantitative analysis of geometric hip joint parameters in MR images from the German National Cohort (GNC) study. A secondary analysis of 40 participants (mean age, 51 years; age range, 30–67 years; 25 women) from the prospective GNC MRI study (2015–2016) was performed. Based on a proton density–weighted three-dimensional fast spin-echo sequence, a morphometric analysis approach was developed, including deep learning–based landmark localization, bone segmentation of the femora and pelvis, and a shape model for annotation transfer. The centrum-collum-diaphyseal, center-edge (CE), and three alpha angles, the head-neck offset (HNO) and HNO ratio, and the acetabular depth, inclination, and anteversion were derived. Quantitative validation was provided by comparison with average manual assessments of radiologists in a cross-validation format. High agreement in Dice similarity coefficients was achieved (average, 97.52% ± 0.46 [standard deviation]). The subsequent morphometric analysis produced results with low mean absolute difference (MAD) values, with the highest being 3.34 (alpha, 03:00 o'clock position) and 0.87 mm (HNO), and intraclass correlation coefficient (ICC) values ranging between 0.288 (HNO ratio) and 0.858 (CE) compared with manual assessments. These values were in line with interreader agreements, which at most had MAD values of 4.02 (alpha, 12:00 o'clock position) and 1.07 mm (HNO) and ICC values ranging between 0.218 (HNO ratio) and 0.777 (CE). Automatic extraction of geometric hip parameters from MRI is feasible using a morphometric analysis approach with deep learning.
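The segmentation agreement reported above (average Dice of 97.52%) uses the standard Dice similarity coefficient. As a sketch, not the study's own pipeline, the metric for two binary masks is simply twice the overlap divided by the total foreground; the masks below are hypothetical.

```python
# Sketch: Dice similarity coefficient, the overlap metric reported in the study.
def dice(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0

auto   = [1, 1, 1, 0, 0, 1]   # hypothetical automatic bone segmentation
manual = [1, 1, 0, 0, 1, 1]   # hypothetical manual reference segmentation
print(dice(auto, manual))     # 2*3 / (4+4) = 0.75
```

A Dice of 1.0 means the automatic and manual masks are identical; the study's ~0.975 indicates near-complete voxel-level overlap.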
Here, in two fMRI experiments, we demonstrate a mechanism of abstraction built upon the valuation of sensory features. Human volunteers learned novel association rules based on simple visual features. Reinforcement-learning algorithms revealed that, with learning, high-value abstract representations increasingly guided participant behaviour, resulting in better choices and higher subjective confidence. We also found that the brain area computing value signals – the ventromedial prefrontal cortex – prioritised and selected latent task elements during abstraction, both locally and through its connection to the visual cortex. Such a coding scheme predicts a causal role for valuation. Hence, in a second experiment, we used multivoxel neural reinforcement to test for the causality of feature valuation in the sensory cortex, as a mechanism of abstraction.
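The reinforcement-learning analyses mentioned above fit learned value estimates to participants' choices. A minimal sketch of the kind of delta-rule value update such models use, with an illustrative learning rate and hypothetical outcomes (not the study's fitted parameters):

```python
# Sketch of a simple delta-rule (Rescorla-Wagner-style) value update of the
# kind reinforcement-learning analyses fit to choice data. The learning rate
# and reward sequence here are illustrative, not the study's fitted values.
def update_value(value, reward, alpha=0.3):
    """V <- V + alpha * (reward - V): nudge the estimate toward the outcome."""
    return value + alpha * (reward - value)

v = 0.0                        # initial value of one visual feature
for reward in [1, 1, 0, 1]:    # hypothetical outcomes after choosing it
    v = update_value(v, reward)
print(round(v, 4))
```

With learning, the value estimate tracks how often a feature is rewarded, and the fMRI analyses ask where in the brain (here, ventromedial prefrontal cortex) such value signals are computed.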
When you deploy intelligent search in your organization, two important factors to consider are access to the latest and most comprehensive information and a contextual discovery mechanism. Many companies are still struggling to make their internal documents searchable in a way that allows employees to get relevant information in a scalable, cost-effective manner. A 2018 International Data Corporation (IDC) study found that data professionals are losing 50% of their time every week: 30% searching for, governing, and preparing data, plus 20% duplicating work. Amazon Kendra is purpose-built to address these challenges. Amazon Kendra is an intelligent search service that uses deep learning and reading comprehension to deliver more accurate search results.
All the sessions from Transform 2021 are available on-demand now. This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. One of the key challenges of deep reinforcement learning models, the kind of AI systems that have mastered Go, StarCraft 2, and other games, is their inability to generalize their capabilities beyond their training domain. This limit makes it very hard to apply these systems to real-world settings, where situations are much more complicated and unpredictable than the environments where AI models are trained. But scientists at AI research lab DeepMind claim to have taken the "first steps to train an agent capable of playing many different games without needing human interaction data," according to a blog post about their new "open-ended learning" initiative. Their new project includes a 3D environment with realistic dynamics and deep reinforcement learning agents that can learn to solve a wide range of challenges.
Isaac Newton may have met his match. For centuries, engineers have relied on physical laws, developed by Newton and others, to understand the stresses and strains on the materials they work with. But solving those equations can be a computational slog, especially for complex materials. MIT researchers have developed a technique to quickly determine certain properties of a material, like stress and strain, based on an image of the material showing its internal structure. The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.
A small company developing an implantable brain-computer interface to help treat conditions like paralysis has received the go-ahead from the Food and Drug Administration (FDA) to kick off clinical trials of its flagship device later this year. New York-based Synchron announced Wednesday it has received FDA approval to begin an early feasibility study of its Stentrode implant later this year at Mount Sinai Hospital with six human subjects. The study will examine the safety and efficacy of its motor neuroprosthesis in patients with severe paralysis, in the hope that the device will allow them to use brain data to "control digital devices and achieve improvements in functional independence." "Patients begin using the device at home soon after implantation and may wirelessly control external devices by thinking about moving their limbs. The system is designed to facilitate better communication and functional independence for patients by enabling daily tasks like texting, emailing, online commerce and accessing telemedicine," the company said in a release.
The transformer architecture has shown an uncanny ability to model not only language but also images and proteins. New research found that it can apply what it learns from the first domain to the others. What's new: Kevin Lu and colleagues at UC Berkeley, Facebook, and Google devised Frozen Pretrained Transformer (FPT). After pretraining a transformer network on language data, they showed that it could perform vision, mathematical, and logical tasks without fine-tuning its core layers. Key insight: Transformers pick up on patterns in an input sequence, be it words in a novel, pixels in an image, or amino acids in a protein.
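The core idea of FPT, training only small input/output layers around a frozen pretrained core, can be illustrated with a toy stand-in. This is a hedged sketch, not the paper's method: a fixed random feature layer plays the role of the frozen transformer, and only the output weights are fit on the "new" task. All names and sizes here are illustrative.

```python
# Sketch of the frozen-core idea behind FPT: keep a fixed feature
# transformation untouched (here a random tanh layer standing in for a
# pretrained transformer) and fit only a small output layer on a new task.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # toy inputs for a "new domain"
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)  # toy targets

W_frozen = 0.1 * rng.normal(size=(8, 32))         # "core" weights: never updated
H = np.tanh(X @ W_frozen)                         # fixed feature representation

# Train only the output layer (closed-form least squares); W_frozen is untouched.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ w_out
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2 > 0.9)                                   # the frozen core transfers well
```

The surprise in the FPT result is analogous: representations learned on language, left frozen, still supported vision, math, and logic tasks once thin task-specific layers were fit around them.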