
Automated Morphometric Analysis of the Hip Joint on MRI from the German National Cohort Study

#artificialintelligence

To develop and validate an automated morphometric analysis framework for the quantitative analysis of geometric hip joint parameters in MR images from the German National Cohort (GNC) study. A secondary analysis on 40 participants (mean age, 51 years; age range, 30–67 years; 25 women) from the prospective GNC MRI study (2015–2016) was performed. Based on a proton density–weighted three-dimensional fast spin-echo sequence, a morphometric analysis approach was developed, including deep learning based landmark localization, bone segmentation of the femora and pelvis, and a shape model for annotation transfer. The centrum-collum-diaphyseal angle, center-edge (CE) angle, three alpha angles, head-neck offset (HNO), and HNO ratio, along with the acetabular depth, inclination, and anteversion, were derived. Quantitative validation was provided by comparison with averaged manual assessments of radiologists in a cross-validation format. High agreement in mean Dice similarity coefficients was achieved (average of 97.52% ± 0.46 [standard deviation]). The subsequent morphometric analysis produced results with low MAD values, the highest being 3.34° (alpha, 03:00 o'clock position) and 0.87 mm (HNO), and ICC values ranging between 0.288 (HNO ratio) and 0.858 (CE) compared with manual assessments. These values were in line with interreader agreements, which at most had MAD values of 4.02° (alpha, 12:00 o'clock position) and 1.07 mm (HNO) and ICC values ranging between 0.218 (HNO ratio) and 0.777 (CE). Automatic extraction of geometric hip parameters from MRI is feasible using a morphometric analysis approach with deep learning.
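The validation metrics used here are straightforward to reproduce. Below is a minimal sketch, not the study's code, of the Dice similarity coefficient for segmentation overlap and of MAD read as the mean absolute difference between automatic and averaged manual measurements; the array shapes and example values are placeholders.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else float(2.0 * intersection / denom)

def mean_absolute_difference(auto_vals, manual_vals) -> float:
    """Mean absolute difference between automatic and averaged manual measurements."""
    auto_vals = np.asarray(auto_vals, dtype=float)
    manual_vals = np.asarray(manual_vals, dtype=float)
    return float(np.mean(np.abs(auto_vals - manual_vals)))

# Illustrative usage with random placeholder data (not study data).
rng = np.random.default_rng(0)
pred = rng.random((128, 128, 64)) > 0.5
gt = rng.random((128, 128, 64)) > 0.5
print(f"Dice: {100 * dice_coefficient(pred, gt):.2f}%")
print(f"MAD:  {mean_absolute_difference([55.1, 48.3], [53.9, 50.0]):.2f} degrees")
```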


Where is Artificial Intelligence (AI) Going in 2021?

#artificialintelligence

Artificial intelligence is going to the Edge, and more so in 2021. Edge Computing refers to processing data on the device, closer to where the data is generated, at the edge of the network. We'll see AI increasingly running inference on the devices around us, including mobile devices, sensors, and smart cameras with Graphical Processing Units (GPUs), or specialised AI chips, embedded in the device. Indeed, it is interesting to wonder where NVIDIA (arguably the world leader in GPUs) will go next following the acquisition of ARM for $40 billion. Counterpoint Research forecast that the number of mobile devices with GPUs (or AI chips) will increase from 190 million in 2019 to 1.25 billion by the end of 2022, accounting for 3 out of 4 mobile devices.
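As a concrete illustration of the on-device inference described above, the sketch below runs an exported model locally with ONNX Runtime on CPU; the model path and input shape are placeholders rather than any particular product's setup.

```python
import numpy as np
import onnxruntime as ort

# Load an exported model with a CPU (edge-friendly) execution provider.
# "model.onnx" is a placeholder path for any image classifier exported to ONNX.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's expected input name instead of hard-coding it.
input_name = session.get_inputs()[0].name

# A dummy 224x224 RGB frame standing in for a camera or sensor reading.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference locally; no data leaves the device.
outputs = session.run(None, {input_name: frame})
print("top class:", int(np.argmax(outputs[0])))
```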


Google's AI DeepMind discovers 3D structures of all proteins, puts it online for free

#artificialintelligence

The evolution of human beings has been a constant process, and so has the development of the medical world around us, a key part of which is understanding genome structures. The human genome carries instructions for more than 20,000 proteins, but the structures of barely one-third of them have been determined. Now, Artificial Intelligence (AI) has predicted the structure of nearly all human proteins, a problem that baffled scientists for decades. The AI, AlphaFold, developed by Google's DeepMind, has assembled a database of these predicted structures and is making it all available online for researchers to use free of cost. Proteins have challenged scientists for decades due to the unique and confounding 3D structures they form from amino acids.


Prediction of Submucosal Invasion for Gastric Neoplasms in Endoscopic Images Using Deep-Learning – Digital Health and Patient Safety Platform

#artificialintelligence

Endoscopic resection is recommended for gastric neoplasms confined to the mucosa or superficial submucosa. The determination of invasion depth is based on gross morphology assessed in endoscopic images or on endoscopic ultrasound. These methods have limited accuracy and are subject to inter-observer variability. Several studies have developed deep-learning (DL) algorithms for classifying the invasion depth of gastric cancers. Nevertheless, these algorithms are intended to be used after a definite diagnosis of gastric cancer, which is not always feasible across the range of gastric neoplasms.
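The summary above describes the general recipe, a CNN classifier over endoscopic images, rather than any released code. A minimal sketch of that idea, fine-tuning a pretrained ResNet for two illustrative classes (mucosa-confined vs. submucosal invasion), could look like this; the class definitions and data are assumptions, not the cited studies' setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary invasion-depth classifier: mucosa-confined vs. submucosal invasion
# (illustrative class definitions, not taken from the cited studies).
NUM_CLASSES = 2

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of endoscopic images.
images = torch.rand(8, 3, 224, 224)           # placeholder batch
labels = torch.randint(0, NUM_CLASSES, (8,))  # placeholder labels
optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```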


DeepMind's AI for protein structure is coming to the masses

#artificialintelligence

Image: The structure of human interleukin-12 protein bound to its receptor, as predicted by machine-learning software. Credit: Ian Haydon, UW Medicine Institute for Protein Design.

Software that accurately determines the 3D shape of proteins is set to become widely available to scientists. On 15 July, the London-based company DeepMind released an open-source version of its deep-learning neural network AlphaFold 2 and described its approach in a paper in Nature. The network dominated a protein-structure prediction competition last year. Meanwhile, an academic team has developed its own protein-prediction tool inspired by AlphaFold 2, which is already gaining popularity with scientists. That system, called RoseTTAFold, performs nearly as well as AlphaFold 2 and is described in a Science paper also published on 15 July.
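For readers who pick up the open-source release or its predictions, one practical detail is that AlphaFold writes its per-residue confidence score (pLDDT) into the B-factor column of the output PDB files. A minimal Biopython sketch for reading it back might look as follows; the file name is a placeholder.

```python
from Bio.PDB import PDBParser

# AlphaFold stores per-residue pLDDT confidence (0-100) in the B-factor column.
# "predicted_model.pdb" is a placeholder for any AlphaFold output file.
parser = PDBParser(QUIET=True)
structure = parser.get_structure("model", "predicted_model.pdb")

plddt_per_residue = []
for residue in structure.get_residues():
    atoms = list(residue.get_atoms())
    if atoms:  # all atoms of a residue carry the same pLDDT value
        plddt_per_residue.append(atoms[0].get_bfactor())

if plddt_per_residue:
    mean_plddt = sum(plddt_per_residue) / len(plddt_per_residue)
    print(f"residues: {len(plddt_per_residue)}, mean pLDDT: {mean_plddt:.1f}")
```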


Detection and Semiquantitative Analysis of Cardiomegaly, Pneumothorax, and Pleural Effusion on Chest Radiographs

#artificialintelligence

To develop and evaluate deep learning models for the detection and semiquantitative analysis of cardiomegaly, pneumothorax, and pleural effusion on chest radiographs. In this retrospective study, models were trained for lesion detection or for lung segmentation. The first dataset, for lesion detection, consisted of 2838 chest radiographs from 2638 patients (obtained between November 2018 and January 2020) containing findings positive for cardiomegaly, pneumothorax, and pleural effusion; it was used to develop Mask region-based convolutional neural network (Mask R-CNN) plus point-based rendering (PointRend) models. Separate detection models were trained for each disease. The second dataset came from two public datasets and included 704 chest radiographs for training and testing a U-Net for lung segmentation.
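As a rough sketch of the detection side of such a pipeline (not the paper's code), torchvision's Mask R-CNN can be reconfigured with a single-finding head per disease, as below. Note that torchvision does not bundle PointRend; the point-based rendering component would have to come from elsewhere, for example detectron2's PointRend project.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_detector(num_classes: int = 2):  # background + one finding (e.g. pleural effusion)
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head so the model predicts our classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask head accordingly.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

model = build_detector()
model.eval()
with torch.no_grad():
    # A grayscale radiograph replicated to 3 channels, as torchvision expects.
    dummy = [torch.rand(3, 512, 512)]
    outputs = model(dummy)  # boxes, labels, scores, and masks per image
```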


Deep Learning Approach to Detect Banana Plant Diseases

#artificialintelligence

Hello folks :) This is my final year research project based on deep learning. Let me give an introduction to my project first. Bananas are a famous fruit, commonly available across the world, because they instantly boost your energy; they are among the most consumed fruits in the world. According to recent estimates, bananas are grown in around 107 countries, and they are said to help lower blood pressure and reduce the risk of cancer and asthma.


Deep learning and liver disease

#artificialintelligence

Many medical imaging techniques have played a pivotal role in the early detection, diagnosis, and treatment of diseases, such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), mammography, and X-ray. AI has made significant progress in allowing machines to automatically represent and explain complicated data. It is widely applied in the medical field, especially in domains that require imaging data analysis. According to Vivanti et al., deep learning models based on longitudinal liver CT studies could detect new liver tumours automatically with a true positive rate of 86%, while the stand-alone detection rate was only 72%; the method achieved a precision of 87%, an improvement of 39% over the traditional SVM model. CNN models that use ultrasound images to detect liver lesions have also been developed. According to Liu et al., a CNN model based on liver ultrasound images can effectively extract the liver capsule and accurately diagnose liver cirrhosis, with a diagnostic AUC reaching 0.968.
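The figures quoted above (true positive rate, precision, AUC) are standard classification metrics; a minimal scikit-learn sketch with placeholder labels and scores shows how they are computed.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Placeholder ground-truth labels and model scores (1 = lesion / cirrhosis present).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.75, 0.60, 0.40, 0.05, 0.88, 0.30, 0.55, 0.20])
y_pred = (y_score >= 0.5).astype(int)  # threshold the scores at 0.5

print("true positive rate (recall):", recall_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_score))
```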


Hospitals deployed AI en masse to help with COVID-19; research suggests the tools don't work

#artificialintelligence

A June report from the Turing Institute, the U.K.'s national centre for data science and artificial intelligence, found that AI tools had little to no effect in combatting COVID-19. A separate study published in the British Medical Journal analyzed 232 algorithms designed to diagnose patients or predict how sick they might become with COVID-19. Researchers found none of them were fit for clinical use, and only two were promising enough for future testing. A study published in Nature Machine Intelligence looked at 415 deep-learning models created to diagnose COVID-19 patients and predict patient risk from medical images. Researchers concluded none of them were fit for clinical use.


DeepMind releases database with AI predictions for every human protein shape

#artificialintelligence

DeepMind released a free, open, big-deal database last week containing AI predictions for the shapes of every protein in the human body. Not only is it the most complete picture of the human proteome (the full set of human proteins) to date, according to the London-based AI lab--it's also "doubling humanity's accumulated knowledge of high-accuracy human protein structures." Deepening our understanding of protein structures can lead to major leaps forward in understanding diseases, as well as in drug and vaccine development. That could help tackle anything from neglected diseases to the next pandemic. Recap: In December 2020, AlphaFold, DeepMind's neural network, made a breakthrough in protein folding--a biological mystery that had puzzled scientists for 50 years.
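As an illustration of that free access, the sketch below fetches a single predicted structure from the AlphaFold Protein Structure Database by UniProt accession; the file-naming pattern and the model version suffix used here are assumptions that may change, so check the database's current download documentation.

```python
import requests

# Human hemoglobin subunit alpha as an example UniProt accession.
accession = "P69905"

# Assumed file-naming pattern for AlphaFold DB downloads; the "_v4" model
# version suffix changes over time, so verify against the current docs.
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

response = requests.get(url, timeout=30)
response.raise_for_status()

out_path = f"AF-{accession}-F1.pdb"
with open(out_path, "wb") as fh:
    fh.write(response.content)
print(f"saved {len(response.content)} bytes to {out_path}")
```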