image segmentation


Methods in AI: The Magnificent Seven -- Learn

#artificialintelligence

What does it mean when we say that a model "learns from experience"? The system architect doesn't "teach" the model anything formal. Rather, the model learns from -- and its entire world is subsequently defined by -- the data used to train it. The model draws on past experience and history as training data, and long-established learning techniques discern any underlying patterns. The recognition of these patterns is codified as intelligence, so that the model knows what to make of data it encounters in a "post-training" environment.
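The idea can be made concrete with a deliberately tiny sketch (all names and data here are illustrative, not from the article): a nearest-centroid classifier whose entire "knowledge" is the averages it computed from past data, applied later to unseen inputs.

```python
# Minimal sketch of "learning from experience": the model is shaped
# entirely by its training data; nothing is taught formally.

def train(samples):
    """Learn per-class feature averages (centroids) from labelled history."""
    sums, counts = {}, {}
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    """Post-training: classify a new input by its nearest learned centroid."""
    def sq_dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], features))
    return min(centroids, key=sq_dist)

# Past experience, codified as "intelligence" (the centroids):
history = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
           ([5.0, 5.1], "dog"), ([4.8, 5.3], "dog")]
model = train(history)
print(predict(model, [1.1, 1.0]))  # prints "cat"
```

The point of the sketch: `model` contains nothing but statistics of the training data, which is exactly why the data defines the model's entire world.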


From Zero to Topo ( Part 3)

#artificialintelligence

A journey to learn about topology-preserving image segmentation. We continue the journey with the third paper on this topic, "Topology-Preserving Deep Image Segmentation". As we mentioned in the previous articles, state-of-the-art segmentation algorithms are still prone to errors on fine-scale structures, such as small object instances, instances with multiple connected components, and thin connections. Therefore, Hu et al. propose TopoNet, a novel deep segmentation method that learns to segment with correct topology. In particular, they propose a topological loss that enforces the segmentation results to have the same topology as the ground truth, i.e., the same Betti numbers (number of connected components and handles).
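To build intuition for the Betti-number idea, here is a much-simplified sketch (this is not the paper's persistent-homology loss): the 0-th Betti number of a binary mask is its number of connected components, and a topology-aware penalty can compare that count between a prediction and the ground truth.

```python
# Illustrative only: count 4-connected foreground components (Betti-0)
# and penalize a mismatch between prediction and ground truth.

def betti0(mask):
    """Number of 4-connected foreground components in a binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen, components = set(), 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                components += 1
                stack = [(r, c)]
                while stack:  # flood fill this component
                    y, x = stack.pop()
                    if not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if (y, x) in seen or not mask[y][x]:
                        continue
                    seen.add((y, x))
                    stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return components

ground_truth = [[1, 1, 1],
                [0, 0, 1]]   # one component, joined by a thin connection
prediction   = [[1, 0, 1],
                [0, 0, 1]]   # the thin connection was broken: two components

topology_penalty = abs(betti0(prediction) - betti0(ground_truth))
print(topology_penalty)  # prints 1
```

A pixel-wise loss would barely notice this single-pixel error, but the topological penalty flags it, which is exactly the kind of fine-scale mistake the paper targets.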


Machine learning radically reduces workload of cell counting for disease diagnosis

#artificialintelligence

Using machine learning to perform blood cell counts for disease diagnosis, instead of expensive and often less accurate cell analyzer machines, has nevertheless been very labor-intensive, because training the machine learning model requires an enormous amount of manual annotation work by humans. However, researchers at Beihang University have developed a new training method that automates much of this activity. Their new training scheme is described in a paper published in the journal Cyborg and Bionic Systems on April 9. The number and type of cells in the blood often play a crucial role in disease diagnosis, but the cell analysis techniques commonly used to count blood cells -- involving the detection and measurement of physical and chemical characteristics of cells suspended in fluid -- are expensive and require complex preparations. Worse still, the accuracy of cell analyzer machines is only about 90 percent, because influences such as temperature, pH, voltage, and magnetic field can confuse the equipment.


Real Time Image Segmentation Using 5 Lines of Code - KDnuggets

#artificialintelligence

Image segmentation is an area of computer vision that deals with separating the contents of an image into different categories for better analysis. Its contributions to solving computer vision problems such as medical image analysis, background editing, vision in self-driving cars, and satellite image analysis make it an invaluable field. One of the greatest challenges in computer vision is balancing accuracy against speed for real-time applications: a solution tends to be either more accurate but slow, or faster but less accurate. PixelLib is a library created to allow easy integration of object segmentation in images and videos using a few lines of Python code.


RStudio AI Blog: Train in R, run on Android: Image segmentation with torch

#artificialintelligence

In a sense, image segmentation is not that different from image classification. And as in image classification, the categories of interest depend on the task: foreground versus background, say; different types of tissue; different types of vegetation; et cetera. The present post is not the first on this blog to treat that topic; and like all prior ones, it makes use of a U-Net architecture to achieve its goal. It demonstrates how to perform data augmentation for an image segmentation task. It uses luz, torch's high-level interface, to train the model.
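The post's augmentation step hinges on one detail worth spelling out: any spatial transform applied to the input image must be applied identically to the mask, or the pixel-wise labels no longer line up. A language-neutral sketch of that idea (illustrative stand-in, not the blog's luz/torch code):

```python
import random

def hflip(grid):
    """Horizontally flip a 2D grid (image or mask)."""
    return [list(reversed(row)) for row in grid]

def augment(image, mask, p=0.5, rng=random):
    """Randomly flip, applying the SAME transform to image and mask."""
    if rng.random() < p:
        return hflip(image), hflip(mask)
    return image, mask

image = [[10, 20, 30]]
mask  = [[ 0,  0,  1]]            # the label follows the bright pixel
aug_img, aug_mask = augment(image, mask, p=1.0)  # force the flip
print(aug_img, aug_mask)          # prints [[30, 20, 10]] [[1, 0, 0]]
```

Flipping only the image (but not the mask) would silently corrupt the training signal, which is why segmentation pipelines bundle the two into one paired transform.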


Vision beyond classification: Task II: Image Segmentation

#artificialintelligence

Image segmentation is a computer vision task in which we label specific regions of pixels in an image with their corresponding classes. Since we predict a class for every pixel in the image, this task is commonly referred to as a dense prediction problem, whereas classification is a sparse prediction problem. There are two types of image segmentation: semantic segmentation and instance segmentation. Semantic segmentation is the process of labeling one or more specific regions of interest in an image; it treats multiple objects within a single category as one entity.
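The two distinctions in the text can be shown with toy masks (values are illustrative): semantic segmentation gives every pixel a class, so two cats share one label, while instance segmentation additionally separates the two cats into distinct entities.

```python
BACKGROUND, CAT = 0, 1

# Semantic mask: both cats carry the same class label.
semantic_mask = [[CAT, CAT, BACKGROUND, CAT],
                 [CAT, CAT, BACKGROUND, CAT]]

# Instance mask: cat #1 and cat #2 get distinct instance ids.
instance_mask = [[1, 1, 0, 2],
                 [1, 1, 0, 2]]

# "Dense prediction": one label per pixel, not one label per image.
num_labelled = sum(1 for row in semantic_mask for _ in row)
print(num_labelled)  # prints 8 -- every pixel gets a prediction
```

Counting the distinct non-background ids in `instance_mask` gives 2 objects, information the semantic mask alone cannot provide.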


Artificial intelligence to bring museum specimens to the masses

#artificialintelligence

Scientists are using cutting-edge artificial intelligence to help extract complex information from large collections of museum specimens. A team from Cardiff University is using state-of-the-art techniques to automatically segment and capture information from museum specimens and perform important data-quality improvements without the need for human input. They have been working with museums from across Europe, including the Natural History Museum, London, to refine and validate their new methods and contribute to the mammoth task of digitizing hundreds of millions of specimens. With more than 3 billion biological and geological specimens curated in natural history museums around the world, the digitization of museum specimens, in which physical information from a particular specimen is transformed into a digital format, has become an increasingly important task for museums as they adapt to a digital world. This treasure trove of digital information is invaluable for scientists trying to model the past, present and future of organisms and our planet, and could be key to tackling some of the biggest societal challenges our world faces today, from conserving biodiversity and tackling climate change to finding new ways to cope with emerging diseases like COVID-19.


7 Best Free Computer Vision Courses

#artificialintelligence

This is a free-to-audit course on Coursera. That means you can access the course material free of cost, but you have to pay for the certificate. In this course, you will understand the basics of computer vision and learn about color, light, and image formation; early, mid-level, and high-level vision; and the mathematics essential for computer vision. Throughout this course, you will apply mathematical techniques to complete computer vision tasks. A free license to install MATLAB for the duration of the course is available from MathWorks.


A Field of Experts Prior for Adapting Neural Networks at Test Time

arXiv.org Machine Learning

Performance of convolutional neural networks (CNNs) in image analysis tasks is often marred in the presence of acquisition-related distribution shifts between training and test images. Recently, it has been proposed to tackle this problem by fine-tuning trained CNNs for each test image. Such test-time adaptation (TTA) is a promising and practical strategy for improving robustness to distribution shifts, as it requires neither data sharing between institutions nor annotating additional data. Previous TTA methods use a helper model to increase similarity between outputs and/or features extracted from a test image with those of the training images. Such helpers, which are typically modeled using CNNs, can be task-specific and themselves vulnerable to distribution shifts in their inputs. To overcome these problems, we propose to carry out TTA by matching the feature distributions of test and training images, as modelled by a field-of-experts (FoE) prior. FoEs model complicated probability distributions as products of many simpler expert distributions. We use 1D marginal distributions of a trained task CNN's features as experts in the FoE model. Further, we compute principal components of patches of the task CNN's features, and consider the distributions of PCA loadings as additional experts. We validate the method on 5 MRI segmentation tasks (healthy tissues in 4 anatomical regions and lesions in one anatomy), using data from 17 clinics, and on an MRI registration task, using data from 3 clinics. We find that the proposed FoE-based TTA is generically applicable in multiple tasks, and outperforms all previous TTA methods for lesion segmentation. For healthy tissue segmentation, the proposed method outperforms other task-agnostic methods, but a previous TTA method which is specifically designed for segmentation performs the best for most of the tested datasets. Our code is publicly available.
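A heavily simplified sketch of the core idea (not the authors' code, and omitting the PCA experts): store a 1D histogram of a feature channel from training, then measure how far the same channel's test-time histogram has drifted; TTA would fine-tune the network to shrink that divergence.

```python
import math

def histogram(values, bins=4, lo=0.0, hi=1.0, eps=1e-6):
    """Normalized 1D histogram with a small floor to keep the KL finite."""
    counts = [eps] * bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical feature activations from one channel of a trained CNN:
train_features = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]   # bimodal on training data
test_features  = [0.4, 0.45, 0.5, 0.55, 0.6, 0.65]  # shifted at test time

mismatch = kl(histogram(test_features), histogram(train_features))
# A TTA loop would update the network to drive `mismatch` toward zero.
```

Because the "expert" here is just a marginal of the task network's own features, no extra task-specific helper model is needed, which is the robustness argument the abstract makes.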


An Embarrassingly Simple Consistency Regularization Method for Semi-Supervised Medical Image Segmentation

arXiv.org Artificial Intelligence

The scarcity of pixel-level annotation is a prevalent problem in medical image segmentation tasks. In this paper, we introduce a novel regularization strategy involving interpolation-based mixing for semi-supervised medical image segmentation. The proposed method is a new consistency regularization strategy that encourages the segmentation of an interpolation of two unlabelled samples to be consistent with the interpolation of the segmentation maps of those samples. This method represents a specific type of data-adaptive regularization paradigm that helps minimize the overfitting of labelled data under high confidence values. The proposed method is advantageous over adversarial and generative models as it requires no additional computation. Evaluated on two publicly available MRI datasets, ACDC and MMWHS, the proposed method demonstrates superiority over existing semi-supervised models. Code is available at: https://github.com/hritam-98/ICT-MedSeg
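The consistency term described in the abstract can be sketched on a toy scalar "model" (illustrative only, not the authors' implementation): for unlabelled inputs x1 and x2, the prediction on the mixed input lam*x1 + (1-lam)*x2 should match the same mix of the individual predictions.

```python
def model(x):
    """Stand-in nonlinear 'segmentation' model on a scalar input."""
    return x * x

def consistency_loss(x1, x2, lam=0.3):
    """Interpolation-consistency penalty on two unlabelled samples."""
    pred_of_mix = model(lam * x1 + (1 - lam) * x2)   # segment the mixed input
    mix_of_preds = lam * model(x1) + (1 - lam) * model(x2)  # mix the outputs
    return (pred_of_mix - mix_of_preds) ** 2

print(consistency_loss(1.0, 3.0))  # nonzero: this model is not yet consistent
```

Training on this penalty needs only extra forward passes on mixed inputs, no discriminator or generator, which is why the abstract calls the method computationally cheap compared to adversarial and generative approaches.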