To develop and validate an automated morphometric analysis framework for the quantitative analysis of geometric hip joint parameters in MR images from the German National Cohort (GNC) study. A secondary analysis of 40 participants (mean age, 51 years; age range, 30–67 years; 25 women) from the prospective GNC MRI study (2015–2016) was performed. Based on a proton density–weighted three-dimensional fast spin-echo sequence, a morphometric analysis approach was developed, including deep learning-based landmark localization, bone segmentation of the femora and pelvis, and a shape model for annotation transfer. The centrum-collum-diaphyseal (CCD) and center-edge (CE) angles, three alpha angles, the head-neck offset (HNO), and the HNO ratio, along with the acetabular depth, inclination, and anteversion, were derived. Quantitative validation was provided by comparison with averaged manual assessments by radiologists in a cross-validation format. High segmentation agreement was achieved, with a mean Dice similarity coefficient of 97.52% ± 0.46 [standard deviation]. The subsequent morphometric analysis produced results with low mean absolute deviation (MAD) values, the highest being 3.34° (alpha angle at the 03:00 o'clock position) and 0.87 mm (HNO), and intraclass correlation coefficient (ICC) values ranging between 0.288 (HNO ratio) and 0.858 (CE) compared with manual assessments. These values were in line with interreader agreement, which at most had MAD values of 4.02° (alpha angle at the 12:00 o'clock position) and 1.07 mm (HNO) and ICC values ranging between 0.218 (HNO ratio) and 0.777 (CE). Automatic extraction of geometric hip parameters from MRI is feasible using a morphometric analysis approach with deep learning.
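The segmentation quality in the study is reported as a mean Dice similarity coefficient, which measures the overlap between the automatic and manual bone masks. Below is a minimal sketch of how that metric is computed on binary masks; the array shapes and values are illustrative toys, not the study's data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks (1.0 = perfect overlap)."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Two toy 10x10 "segmentations" with partially overlapping squares
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True       # hypothetical automatic mask (36 voxels)
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True     # hypothetical manual mask (36 voxels)

print(round(dice(auto, manual), 4))  # 2*25 / (36+36) ≈ 0.6944
```

In practice the coefficient is evaluated per bone structure on 3D voxel masks and averaged across cases, which is how a summary figure such as 97.52% arises.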
Isaac Newton may have met his match. For centuries, engineers have relied on physical laws -- developed by Newton and others -- to understand the stresses and strains on the materials they work with. But solving those equations can be a computational slog, especially for complex materials. MIT researchers have developed a technique to quickly determine certain properties of a material, like stress and strain, based on an image of the material showing its internal structure. The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.
A small company developing an implantable brain-computer interface to help treat conditions like paralysis has received the go-ahead from the Food and Drug Administration (FDA) to kick off clinical trials of its flagship device later this year. New York-based Synchron announced Wednesday it has received FDA approval to begin an early feasibility study of its Stentrode implant later this year at Mount Sinai Hospital with six human subjects. The study will examine the safety and efficacy of its motor neuroprosthesis in patients with severe paralysis, in the hope that the device will allow them to use brain data to "control digital devices and achieve improvements in functional independence." "Patients begin using the device at home soon after implantation and may wirelessly control external devices by thinking about moving their limbs. The system is designed to facilitate better communication and functional independence for patients by enabling daily tasks like texting, emailing, online commerce and accessing telemedicine," the company said in a release.
The transformer architecture has shown an uncanny ability to model not only language but also images and proteins. New research found that it can apply what it learns from the first domain to the others. What's new: Kevin Lu and colleagues at UC Berkeley, Facebook, and Google devised Frozen Pretrained Transformer (FPT). After pretraining a transformer network on language data, they showed that it could perform vision, mathematical, and logical tasks without fine-tuning its core layers. Key insight: Transformers pick up on patterns in an input sequence, be it words in a novel, pixels in an image, or amino acids in a protein.
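FPT's central trick is to keep the pretrained transformer's core layers frozen and train only small input and output layers for each new modality. The sketch below illustrates that training regime with a deliberately tiny three-layer linear model in NumPy: the "core" weights stand in for the frozen pretrained blocks, and only the input and output projections receive gradient updates. All names, dimensions, and the linear architecture are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analogue of the Frozen Pretrained Transformer training regime:
# a fixed "pretrained core" plus small trainable input/output layers.
d_in, d_core, d_out, n = 4, 8, 2, 32

W_core = rng.normal(size=(d_core, d_core))      # frozen: stands in for pretrained blocks
W_in = 0.1 * rng.normal(size=(d_core, d_in))    # trainable input projection
W_out = 0.1 * rng.normal(size=(d_out, d_core))  # trainable output head

X = rng.normal(size=(d_in, n))                  # toy inputs (columns are samples)
Y = rng.normal(size=(d_out, n))                 # toy regression targets
W_core_before = W_core.copy()

lr, losses = 1e-3, []
for _ in range(300):
    H = W_in @ X             # input projection (trainable)
    Z = W_core @ H           # frozen core: used in forward pass, never updated
    P = W_out @ Z            # output head (trainable)
    E = P - Y
    losses.append(0.5 * np.mean(np.sum(E ** 2, axis=0)))
    # Gradients flow *through* the frozen core, but only the
    # input and output layers are updated.
    g_out = E @ Z.T / n
    g_in = W_core.T @ (W_out.T @ E) @ X.T / n
    W_out -= lr * g_out
    W_in -= lr * g_in

assert np.array_equal(W_core, W_core_before)    # core stayed frozen
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The design point this mimics is that the frozen parameters still shape the representation (gradients pass through them), while the cheap-to-train adapters fit the new task, which is why FPT can transfer across modalities without fine-tuning its core.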
Artificial intelligence (AI) has become a staple of our daily lives, from Siri to Google Assistant, voice assistants that can control our phones, computers and even homes. The world of media has explored the advancement and potential dangers of rapidly advancing AI for decades; films such as Blade Runner and 2001: A Space Odyssey have touched on the theme of what happens when AI grows beyond human control. But how much does this affect our perception of AI and its involvement in our daily lives? Using data from Google Search Trends and Linkfluence, new research from Ebuyer has revealed that almost 3 million people globally had searched for negative themes around AI online. The research found that the biggest search queries included "Can artificial intelligence be dangerous?"
Researchers are using artificial intelligence (AI) techniques to calibrate some of NASA's images of the Sun. Launched in 2010, NASA's Solar Dynamics Observatory (SDO) has provided high-definition images of the Sun for over a decade. The Atmospheric Imaging Assembly, or AIA, is one of two imaging instruments on SDO and looks constantly at the Sun, taking images across 10 wavelengths of ultraviolet light every 12 seconds. This creates a wealth of information about the Sun like no other, but, like all Sun-staring instruments, AIA degrades over time, and the data needs to be frequently calibrated, NASA said in a statement. To overcome this challenge, scientists decided to look at other options to calibrate the instrument, with an eye toward constant calibration.
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. One of the key challenges of deep reinforcement learning models--the kind of AI systems that have mastered Go, StarCraft 2, and other games--is their inability to generalize their capabilities beyond their training domain. This limit makes it very hard to apply these systems to real-world settings, where situations are much more complicated and unpredictable than the environments where AI models are trained. But scientists at AI research lab DeepMind claim to have taken the "first steps to train an agent capable of playing many different games without needing human interaction data," according to a blog post about their new "open-ended learning" initiative. Their new project includes a 3D environment with realistic dynamics and deep reinforcement learning agents that can learn to solve a wide range of challenges. The new system, according to DeepMind's AI researchers, is an "important step toward creating more general agents with the flexibility to adapt rapidly within constantly changing environments."
Why does Covid-19 present more severely in some patients than in others? The question has puzzled researchers and clinicians since the start of the pandemic, but new research from the EPFL Blue Brain Project may have found a major clue to solving the mystery, thanks to machine learning. By analyzing data extracted from 240,000 open-access scientific papers, a paper published in Frontiers revealed the previously unrecognized role that elevated blood glucose levels play in the severity of Covid-19. What makes one person more at risk of developing severe Covid-19 than someone else? While it is widely accepted that elderly people are the most at risk during the current pandemic, many young, seemingly healthy people have also been hospitalized by the disease.
Astronomers have designed and trained a computer program that can classify tens of thousands of galaxies in just a few seconds, a task that usually takes months to accomplish. In research published today, astrophysicists from Australia have used machine learning to speed up a process that is often done manually by astronomers and citizen scientists around the world. "Galaxies come in different shapes and sizes," said lead author Mitchell Cavanagh, a Ph.D. candidate based at the University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR). "Classifying the shapes of galaxies is an important step in understanding their formation and evolution, and can even shed light on the nature of the Universe itself." Cavanagh said that with larger surveys of the sky happening all the time, astronomers are collecting too many galaxies to look at and classify on their own.