Diagnostic Performance of Deep Learning for Predicting Gliomas' IDH and 1p/19q Status in MRI: A Systematic Review and Meta-Analysis
Farahani, Somayeh, Hejazi, Marjaneh, Tabassum, Mehnaz, Di Ieva, Antonio, Mahdavifar, Neda, Liu, Sidong
Gliomas, the most common primary brain tumors, show high heterogeneity in histological and molecular characteristics. Accurate molecular profiling, such as isocitrate dehydrogenase (IDH) mutation and 1p/19q codeletion status, is critical for diagnosis, treatment, and prognosis. This review evaluates the efficacy of MRI-based deep learning (DL) models in predicting these biomarkers. Following PRISMA guidelines, we systematically searched major databases (PubMed, Scopus, Ovid, and Web of Science) up to February 2024, screening studies that utilized DL to predict IDH and 1p/19q codeletion status from MRI data of glioma patients. We assessed quality and risk of bias using the radiomics quality score and the QUADAS-2 tool. Our meta-analysis used a bivariate model to compute pooled sensitivity and specificity, and meta-regression to assess inter-study heterogeneity. Of the 565 articles, 57 were selected for qualitative synthesis, and 52 underwent meta-analysis. The pooled estimates showed high diagnostic performance, with validation sensitivity, specificity, and area under the curve (AUC) of 0.84 [prediction interval (PI): 0.67-0.93, I² = 51.10%, p < 0.05], 0.87 [PI: 0.49-0.98, I² = 82.30%, p < 0.05], and 0.89 for IDH prediction, and 0.76 [PI: 0.28-0.96, I² = 77.60%, p < 0.05], 0.85 [PI: 0.49-0.97, I² = 80.30%, p < 0.05], and 0.90 for 1p/19q prediction, respectively. Meta-regression analyses revealed significant heterogeneity influenced by glioma grade, data source, inclusion of non-radiomics data, MRI sequences, segmentation and feature-extraction methods, and validation techniques. DL models demonstrate strong potential for predicting molecular biomarkers from MRI scans, with significant variability influenced by technical and clinical factors. Thorough external validation is necessary to increase clinical utility.
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > New York (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
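The abstract above pools per-study sensitivity and specificity with a bivariate random-effects model. As a rough illustration of what pooling on the logit scale looks like, here is a minimal univariate sketch with entirely hypothetical 2x2 counts and simple inverse-variance weighting; the paper's actual bivariate model additionally estimates between-study variance and the sensitivity-specificity correlation, which this sketch omits:

```python
import math

# Hypothetical per-study counts: (true_pos, false_neg, true_neg, false_pos)
studies = [(80, 15, 70, 10), (45, 12, 50, 8), (100, 20, 90, 15)]

def pooled_logit(events, non_events):
    """Inverse-variance-weighted mean on the logit scale, back-transformed."""
    num = den = 0.0
    for a, b in zip(events, non_events):
        logit = math.log(a / b)      # logit of the proportion a / (a + b)
        var = 1.0 / a + 1.0 / b      # approximate variance of that logit
        w = 1.0 / var
        num += w * logit
        den += w
    mean = num / den
    return 1.0 / (1.0 + math.exp(-mean))  # back-transform to a proportion

tp = [s[0] for s in studies]; fn = [s[1] for s in studies]
tn = [s[2] for s in studies]; fp = [s[3] for s in studies]

sens = pooled_logit(tp, fn)   # pooled sensitivity
spec = pooled_logit(tn, fp)   # pooled specificity
print(f"pooled sensitivity={sens:.3f}, specificity={spec:.3f}")
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and gives studies with more informative counts proportionally more weight.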
On-Device AI: TensorFlow, PyTorch, or In-House -- Picovoice
There is no shortage of articles discussing which deep learning framework is the best. In this article, we want to focus on a niche. Which framework can make your life easier if your goal is On-Device Deployment? We also explore the controversial topic of building your in-house on-device Inference Engine. TensorFlow comes with TensorFlow Lite for Android, iOS, and single-board computers (e.g.
The Results are In: AI is Key to Building and Maintaining Customer Engagement
The following post was written and/or published as a collaboration between Benzinga's in-house sponsored content team and a financial partner of Benzinga. Today's consumer companies and business-to-business (B2B) firms are finding themselves in a fast-paced, competitive race to attract and secure new clients. One approach that's increasingly gaining popularity is customer engagement. Customer engagement entails prioritizing long-term relationships with consumers and B2B clients. It's about working to develop and maintain relationships throughout multiple interactions and across various channels. The research suggests profound benefits for companies that invest in customer engagement solutions.
- Marketing (0.55)
- Banking & Finance > Trading (0.51)
- Information Technology > Software (0.32)
Joint reconstruction and bias field correction for undersampled MR imaging
Gaillochet, Mélanie, Tezcan, Kerem C., Konukoglu, Ender
Undersampling the k-space in MRI saves precious acquisition time, yet results in an ill-posed inversion problem. Recently, many deep learning techniques have been developed to address the problem of recovering the fully sampled MR image from undersampled data. However, these learning-based schemes are susceptible to differences between the training data and the image to be reconstructed at test time. One such difference can be attributed to the bias field present in MR images, caused by field inhomogeneities and coil sensitivities. In this work, we address the sensitivity of the reconstruction problem to the bias field and propose to model it explicitly in the reconstruction in order to decrease this sensitivity. To this end, we use an unsupervised learning-based reconstruction algorithm as our basis and combine it with an N4-based bias field estimation method in a joint optimization scheme. We use the HCP dataset as well as in-house measured images for the evaluations. We show that the proposed method improves reconstruction quality, both visually and in terms of RMSE.
- North America > United States > Mississippi > Marion County (0.04)
- Europe > Switzerland (0.04)
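The joint scheme described in the abstract above alternates between a data-consistent reconstruction step and a smooth bias-field estimate. The idea can be caricatured in a toy 1D NumPy sketch; all signals here are synthetic, and a crude moving-average low-pass stands in for N4's smooth-field model (the authors' actual method uses a learned reconstruction prior and true N4 estimation, neither of which is reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "MR image": piecewise-constant signal times a smooth
# multiplicative bias field (stand-in for coil-sensitivity inhomogeneity).
n = 128
x = np.zeros(n); x[32:96] = 1.0
bias_true = 1.0 + 0.3 * np.sin(np.linspace(0, np.pi, n))
y_img = bias_true * x

# Undersampled k-space: keep a random 50% of frequencies (plus DC).
mask = rng.random(n) < 0.5
mask[0] = True
k = np.fft.fft(y_img) * mask

def smooth(v, width=15):
    """Crude low-pass (moving average) standing in for N4's smooth field."""
    kern = np.ones(width) / width
    return np.convolve(v, kern, mode="same")

img = np.real(np.fft.ifft(k))   # zero-filled starting reconstruction
b = np.ones(n)                  # bias-field estimate
for _ in range(50):
    # (1) enforce data consistency on the biased image b * img
    f = np.fft.fft(b * img)
    f[mask] = k[mask]
    biased = np.real(np.fft.ifft(f))
    # (2) re-estimate a smooth, bounded bias field and correct the image
    b = np.clip(smooth(np.abs(biased) + 1e-6), 0.5, 2.0)
    img = biased / b
```

The point of the alternation is that modeling the bias explicitly lets the reconstruction step work on a bias-corrected image, so a prior trained on bias-free data is not confronted with the field at test time.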
Inside Microsoft's Plan to Bring AI to its HoloLens Goggles
Tech companies are keen to bring cool artificial intelligence features to phones and augmented reality goggles--the ability to show mechanics how to fix an engine, say, or tell tourists what they are seeing and hearing in their own language. But there's one big challenge: how to manage the vast quantities of data that make such feats possible without making the devices too slow or draining the battery in minutes and wrecking the user experience. Microsoft Corp. says it has the answer with a chip design for its HoloLens goggles--an extra AI processor that analyzes what the user sees and hears right there on the device rather than wasting precious microseconds sending the data back to the cloud. The new processor, a version of the company's existing Holographic Processing Unit, was unveiled at an event in Honolulu, Hawaii, on Sunday. The chip is under development and will be included in the next version of HoloLens; the company didn't provide a date.
Quest for AI Leadership Pushes Microsoft Further Into Chip Development
Tech companies are keen to bring cool artificial intelligence features to phones and augmented reality goggles--the ability to show mechanics how to fix an engine, say, or tell tourists what they are seeing and hearing in their own language. But there's one big challenge: how to manage the vast quantities of data that make such feats possible without making the devices too slow or draining the battery in minutes and wrecking the user experience. Microsoft Corp. says it has the answer with a chip design for its HoloLens goggles--an extra AI processor that analyzes what the user sees and hears right there on the device rather than wasting precious microseconds sending the data back to the cloud. The new processor, a version of the company's existing Holographic Processing Unit, is being unveiled at an event in Honolulu, Hawaii, today. The chip is under development and will be included in the next version of HoloLens; the company didn't provide a date.
Quest for AI Leadership Pushes Microsoft Further Into Chip Development
Microsoft Corp. says it has the answer with a chip design for its HoloLens goggles--an extra AI processor that analyzes what the user sees and hears right there on the device rather than wasting precious microseconds sending the data back to the cloud. "For an autonomous car, you can't afford the time to send it back to the cloud to make the decisions to avoid the crash, to avoid hitting a person." But the rapid development of artificial intelligence has left some traditional chip makers facing real competition for the first time in over a decade. More recently, in an effort to take on Google and Amazon.com Inc. in cloud services, the company used customizable chips known as field programmable gate arrays to unleash its AI prowess on real-world challenges.
- Transportation > Ground > Road (0.90)
- Information Technology > Robotics & Automation (0.90)
- Information Technology > Networks (0.56)
AI pushes Microsoft further into chip development - TechCentral
Technology companies are keen to bring artificial intelligence features to phones and augmented reality goggles -- the ability to show mechanics how to fix an engine, say, or tell tourists what they are seeing and hearing in their own language. But there's one big challenge: how to manage the vast quantities of data that make such feats possible without making the devices too slow or draining the battery in minutes and wrecking the user experience. Microsoft says it has the answer with a chip design for its HoloLens goggles -- an extra AI processor that analyses what the user sees and hears right there on the device rather than wasting precious microseconds sending the data back to the cloud. The new processor, a version of the company's existing Holographic Processing Unit, is being unveiled at an event in Honolulu, Hawaii, on Monday. The chip is under development and will be included in the next version of HoloLens; the company didn't provide a date.