deep learning


Doctors, Get Ready for Your AI Assistants

WIRED

In 2023, radiologists in hospitals around the world will increasingly use medical images--which include x-rays and CT, MRI, and PET scans--that have been first read and evaluated by AI machines. Gastroenterologists will also be relying on machine vision during colonoscopies and endoscopies to pick up polyps that would otherwise be missed. This progress has been made possible by the extensive validation of "machine eyes"--deep neural networks trained with hundreds of thousands of images that can accurately pick up things human experts can't.


Fusing batch normalisation and convolution for faster inference

#artificialintelligence

Fusing adjacent convolution (Conv) and batch normalisation (BN) layers is a practical way of boosting inference speed. Batch normalisation is one of the most important regularisation techniques in the modern deep learning field.
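At inference time the fold is a closed-form rewrite of the convolution's parameters: scale the weights by gamma / sqrt(running_var + eps) and shift the bias accordingly, so one layer does the work of two. A minimal PyTorch sketch of the idea (the helper name fuse_conv_bn and the toy shapes are illustrative, not taken from the article):

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold frozen BatchNorm statistics into the preceding convolution."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels,
        kernel_size=conv.kernel_size, stride=conv.stride,
        padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    # Per-output-channel scale: gamma / sqrt(running_var + eps)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# Sanity check: the fused layer should match Conv -> BN in eval mode
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
conv.eval(); bn.eval()
x = torch.randn(1, 3, 32, 32)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```

The rewrite is only valid once the BN layer's running statistics are frozen, which is why it is an inference-time optimisation rather than something applied during training.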


Could an Emerging Deep Learning Modality Enhance CCTA Assessment of Coronary Artery Disease?

#artificialintelligence

Keya Medical has launched the DeepVessel FFR, a software device that utilizes deep learning to facilitate fractional flow reserve (FFR) assessment based on coronary computed tomography angiography (CCTA). Cleared by the Food and Drug Administration (FDA), the DeepVessel FFR provides a three-dimensional coronary artery tree model and estimates of FFR CT value after semi-automated review of CCTA images, according to Keya Medical. The company said the DeepVessel FFR has demonstrated higher accuracy than other non-invasive tests and suggested the software could help reduce invasive procedures for coronary angiography and stent implantation in the diagnostic workup and subsequent treatment of coronary artery disease. Joseph Schoepf, M.D., FACR, FAHA, FNASCI, the principal investigator of a recent multicenter trial to evaluate DeepVessel FFR, says the introduction of the modality in the United States dovetails nicely with recent guidelines for the diagnosis of chest pain. "I am excited to see the implementation of DeepVessel FFR. It comes together with the 2021 ACC/AHA Chest Pain Guidelines' recognition of the elevated diagnostic role of CCTA and FFR CT for the non-invasive evaluation of patients with stable or acute chest pain," noted Dr. Schoepf, a professor of Radiology, Medicine, and Pediatrics at the Medical University of South Carolina.


The Power of Deep Learning. When it comes to learning, artificial…

#artificialintelligence

When it comes to learning, artificial intelligence has come a long way. In the early days of AI, learning was limited to simple tasks carried out by basic algorithms. But thanks to advances in computation and data storage, AI can now tackle much more complex problems using deep learning. Deep learning is a subset of machine learning that uses algorithms to model high-level patterns in data. By doing so, deep learning can enable machines to carry out tasks that would be difficult or impossible for traditional AI methods.
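As a toy illustration of that gap, the XOR pattern is not linearly separable, so a plain linear model cannot learn it, while even a tiny neural network can. The sketch below is hypothetical PyTorch code (not from the article) training such a network:

```python
import torch
import torch.nn as nn

# XOR: no single straight line separates the two classes,
# but one hidden layer of non-linear units is enough to learn it.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Predicted probabilities should approach 0, 1, 1, 0
print(torch.sigmoid(model(X)).detach().flatten())
```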


ChatGPT can write code. Now researchers say it's good at fixing bugs too

ZDNet

OpenAI's ChatGPT chatbot can fix software bugs very well, but its key advantage over other methods and AI models is its ability to hold a dialogue with humans, which allows it to improve the correctness of an answer. Researchers from Johannes Gutenberg University Mainz and University College London pitted OpenAI's ChatGPT against "standard automated program repair techniques" and two deep learning approaches to program repair: CoCoNut, from researchers at the University of Waterloo, Canada; and Codex, OpenAI's GPT-3-based model that underpins GitHub's Copilot pair-programming auto code completion service. "We find that ChatGPT's bug fixing performance is competitive to the common deep learning approaches CoCoNut and Codex and notably better than the results reported for the standard program repair approaches," the researchers write in a new arXiv paper, first spotted by New Scientist. That ChatGPT can solve coding problems isn't new, but the researchers highlight that its unique capacity for dialogue with humans gives it a potential edge over other approaches and models. The researchers tested ChatGPT's performance using the QuixBugs bug-fixing benchmark.
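QuixBugs collects classic algorithms that each contain a single small defect, and the repair system has to localise and correct it. The snippet below shows a bitcount-style defect of the kind found in that benchmark (written from memory, so treat the exact form as illustrative rather than as the benchmark's code):

```python
def bitcount_buggy(n: int) -> int:
    """Count set bits -- buggy: should use &=, not ^= (can loop forever)."""
    count = 0
    while n:
        n ^= n - 1   # defect: XOR leaves low-order bits set
        count += 1
    return count

def bitcount_fixed(n: int) -> int:
    """Count set bits -- repaired with a one-line fix."""
    count = 0
    while n:
        n &= n - 1   # n & (n - 1) clears exactly the lowest set bit
        count += 1
    return count

assert bitcount_fixed(0b101101) == 4
```

Repairs of this kind can be verified automatically against the benchmark's test cases, which makes it convenient for comparing ChatGPT, Codex, CoCoNut and classic program repair tools.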


Probabilistic Logistic Regression and Deep Learning

#artificialintelligence

This article belongs to the series "Probabilistic Deep Learning". This weekly series covers probabilistic approaches to deep learning. The main goal is to extend deep learning models to quantify uncertainty, i.e., know what they do not know. In this article, we will introduce the concept of probabilistic logistic regression, a powerful technique that allows for the inclusion of uncertainty in the prediction process. We will explore how this approach can lead to more robust and accurate predictions, especially in cases where the data is noisy, or the model is overfitting.
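As a minimal sketch of what "probabilistic" means here (using PyTorch and torch.distributions, which may differ from the libraries used in the series), the model returns a Bernoulli distribution over labels rather than a hard prediction, training maximises the likelihood of the data, and the predictive probabilities carry the uncertainty:

```python
import torch
import torch.nn as nn
from torch.distributions import Bernoulli

class ProbabilisticLogisticRegression(nn.Module):
    """Logistic regression that outputs a distribution, not a point estimate."""
    def __init__(self, n_features: int):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x: torch.Tensor) -> Bernoulli:
        # Callers can query probabilities, sample labels, or score log-likelihoods.
        return Bernoulli(logits=self.linear(x).squeeze(-1))

# Toy synthetic data with label noise (illustrative only)
torch.manual_seed(0)
X = torch.randn(500, 2)
y = ((X[:, 0] + 0.5 * X[:, 1] + 0.3 * torch.randn(500)) > 0).float()

model = ProbabilisticLogisticRegression(2)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = -model(X).log_prob(y).mean()   # negative log-likelihood
    loss.backward()
    opt.step()

print(model(X).probs[:5])   # predictive probabilities instead of hard labels
```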


Data mining of Clinical Databases - CDSS 1

#artificialintelligence

This specialisation is for learners with programming experience who are interested in expanding their skills in applying deep learning to Electronic Health Records, with a focus on how to translate their models into Clinical Decision Support Systems. The main areas explored are: data mining of clinical databases (ethics, the MIMIC III database, the International Classification of Diseases system, and definitions of common clinical outcomes); deep learning in Electronic Health Records (from descriptive analytics to predictive analytics); explainable deep learning models for healthcare applications (what they are and why they are needed); and Clinical Decision Support Systems (generalisation, bias, 'fairness', clinical usefulness and privacy of artificial intelligence algorithms).


Finding Social Distance using YOLO and OpenCV

#artificialintelligence

Considering the unfortunate circumstances of COVID-19, keeping distance between people is crucial. The goal is to first detect people using deep learning and then measure the distance between them to check whether the social-distancing norm of about 6 feet (1.8 m) is being maintained. The blog isn't about YOLO or any object-tracking architecture as such; it links to other posts that explain why YOLO performs better than other R-CNN architectures. Pixel density is the number of pixels per inch (ppi) or per centimetre of a display, and it is what lets us convert pixel distances into physical ones.
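A rough sketch of the distance check itself (hypothetical code: it assumes person boxes have already been produced by a detector such as YOLO via OpenCV's dnn module, and the pixels-per-metre constant stands in for whatever calibration the camera setup provides):

```python
import itertools
import numpy as np

PIXELS_PER_METER = 220.0   # assumed calibration for this camera/scene
MIN_DISTANCE_M = 1.8       # roughly 6 feet

def centroid(box):
    """Centre point of an (x, y, w, h) bounding box in pixels."""
    x, y, w, h = box
    return np.array([x + w / 2.0, y + h / 2.0])

def flag_violations(boxes, pixels_per_meter=PIXELS_PER_METER,
                    min_distance_m=MIN_DISTANCE_M):
    """Return index pairs of detected people closer than the threshold."""
    violations = []
    for i, j in itertools.combinations(range(len(boxes)), 2):
        dist_px = np.linalg.norm(centroid(boxes[i]) - centroid(boxes[j]))
        if dist_px / pixels_per_meter < min_distance_m:
            violations.append((i, j))
    return violations

# Example: three detections; the first two stand too close together
boxes = [(100, 200, 60, 160), (180, 210, 60, 150), (900, 220, 60, 155)]
print(flag_violations(boxes))   # -> [(0, 1)]
```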


Why Simple Models Are Often Better

#artificialintelligence

In data science and machine learning, simplicity is an important concept that can have a significant impact on model characteristics such as performance and interpretability. Over-engineered solutions tend to adversely affect these characteristics by increasing the likelihood of overfitting, decreasing computational efficiency, and lowering the transparency of the model's output. The latter is particularly important for areas that require a certain degree of interpretability, such as medicine and healthcare, finance, or law. The inability to interpret and trust a model's decision -- and to ensure that this decision is fair and unbiased -- can have serious consequences for individuals whose fate depends on it. This article aims to highlight the importance of giving precedence to simplicity when implementing a data science or machine learning solution.


Counterfactual explanations for land cover mapping: interview with Cassio Dantas

AIHub

In their paper Counterfactual Explanations for Land Cover Mapping in a Multi-class Setting, Cassio Dantas, Diego Marcos and Dino Ienco apply counterfactual explanations to remote sensing time series data for land-cover mapping classification. In this interview, Cassio tells us more about explainable AI and counterfactuals, the team's research methodology, and their main findings. Our paper falls into the growing topic of explainable artificial intelligence (XAI). Despite the performance achieved by recent deep learning approaches, they remain black-box models, and our understanding of their internal behavior is limited. To improve the general acceptability and trustworthiness of such models, there is a growing need to improve their interpretability and make their decision-making processes more transparent.