Segmentation


Machine learning radically reduces workload of cell counting for disease diagnosis

#artificialintelligence

Machine learning can perform blood cell counts for disease diagnosis in place of expensive and often less accurate cell analyzer machines, but it has nevertheless been very labor-intensive: training the model requires an enormous amount of manual annotation work by humans. However, researchers at Beihang University have developed a new training method that automates much of this activity. Their new training scheme is described in a paper published in the journal Cyborg and Bionic Systems on April 9. The number and type of cells in the blood often play a crucial role in disease diagnosis, but the cell analysis techniques commonly used to perform such counting of blood cells--involving the detection and measurement of physical and chemical characteristics of cells suspended in fluid--are expensive and require complex preparations. Worse still, the accuracy of cell analyzer machines is only about 90 percent, owing to influences such as temperature, pH, voltage, and magnetic field that can confuse the equipment.
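
For context on the task being automated, here is a classical computer-vision baseline for counting cells, not the paper's machine-learning method: threshold a grayscale micrograph and count connected components with scikit-image. The input array here is a random stand-in for a real image.

```python
import numpy as np
from skimage import filters, measure

# Classical baseline for counting cells in a grayscale microscopy image.
# NOT the Beihang paper's method -- just an illustration of the task.
image = np.random.rand(256, 256)  # replace with skimage.io.imread(...)

threshold = filters.threshold_otsu(image)  # global Otsu threshold
binary = image > threshold                 # foreground = candidate cells
labeled = measure.label(binary)            # connected-component labeling
print(f"estimated cell count: {labeled.max()}")
```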


Real Time Image Segmentation Using 5 Lines of Code - KDnuggets

#artificialintelligence

Image segmentation is the computer vision task of partitioning the contents of an image into different categories of objects for finer-grained analysis. Its contributions to solving computer vision problems such as medical image analysis, background editing, vision in self-driving cars, and satellite image analysis make it an invaluable field. One of the greatest challenges in computer vision is balancing accuracy against speed in real-time applications: a solution tends to be either more accurate but slow, or faster but less accurate. PixelLib is a library created to allow easy integration of object segmentation into images and videos using a few lines of Python code, as the sketch below illustrates.
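
A minimal sketch of PixelLib's instance segmentation workflow; the exact entry points depend on the PixelLib version, and this uses the classic TensorFlow-based API with pretrained Mask R-CNN weights (mask_rcnn_coco.h5 must be downloaded separately). The input path is hypothetical.

```python
from pixellib.instance import instance_segmentation

segmenter = instance_segmentation()
segmenter.load_model("mask_rcnn_coco.h5")  # pretrained COCO weights
segmenter.segmentImage(
    "sample.jpg",                # hypothetical input image path
    show_bboxes=True,            # draw boxes alongside the masks
    output_image_name="out.jpg",
)
```

PixelLib also exposes analogous methods for videos and live camera feeds, which is what the article's real-time "5 lines of code" pitch refers to.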


Self-Driving Cars With Convolutional Neural Networks (CNN) - neptune.ai

#artificialintelligence

Humanity has been waiting for self-driving cars for several decades. Thanks to the extremely fast evolution of technology, this idea recently went from "possible" to "commercially available in a Tesla". Deep learning is one of the main technologies that enabled self-driving. It's a versatile tool that can tackle almost any science or engineering problem – it can be used in physics, for example to model proton-proton collisions at the Large Hadron Collider, just as well as in Google Lens to classify pictures. CNNs are the primary algorithm these systems use to recognize and classify different parts of the road and to make appropriate decisions. Along the way, we'll see how Tesla, Waymo, and Nvidia use CNN algorithms to make their cars driverless or autonomous. The first self-driving car, built in 1989, was ALVINN (Autonomous Land Vehicle In a Neural Network). It used neural networks to detect lines, segment the environment, navigate itself, and drive. It worked well, but it was limited by slow processing power and insufficient data.
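
To make the CNN-for-driving idea concrete, here is a minimal sketch in the spirit of NVIDIA's published PilotNet architecture: a small CNN that maps a 66x200 road image to a single steering-angle prediction. This is an illustrative reimplementation under that assumption, not the actual model used by Tesla, Waymo, or Nvidia.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """PilotNet-style CNN: 66x200 RGB road image -> steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # single continuous steering angle
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = SteeringCNN()
angle = model(torch.randn(1, 3, 66, 200))  # dummy 66x200 RGB frame
print(angle.shape)  # torch.Size([1, 1])
```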


A new state of the art for unsupervised computer vision

#artificialintelligence

Labeling data can be a chore. It's the main source of sustenance for computer-vision models; without it, they'd have a lot of difficulty identifying objects, people, and other important image characteristics. Yet producing just an hour of tagged and labeled data can take a whopping 800 hours of human time. As machines get better at perceiving and interacting with our surroundings, their high-fidelity understanding of the world develops. But they need more help.


3D Point Cloud Clustering Tutorial with K-means and Python

#artificialintelligence

If you are on a quest for a (supervised) deep learning algorithm for semantic segmentation -- keyword alert -- you have certainly found yourself searching for high-quality labels and a high quantity of data points. In our 3D data world, the unlabelled nature of 3D point clouds makes it particularly challenging to meet both criteria: without any good training set, it is hard to "train" any predictive model. Should we explore Python tricks and add them to our quiver to quickly produce awesome labeled 3D point cloud datasets? Let us dive right in! Why is unsupervised segmentation & clustering the "bulk of AI"? Deep learning (DL) through supervised systems is extremely useful, and DL architectures have profoundly changed the technological landscape in recent years.
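
As a concrete starting point, here is a self-contained sketch of the core step such a tutorial builds toward: clustering raw XYZ coordinates with scikit-learn's KMeans to produce rough, unsupervised segment labels. The toy cloud is synthetic; loading a real .ply/.las file (e.g., via open3d or laspy) is left out.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic (N, 3) point cloud: three well-separated blobs.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=center, scale=0.2, size=(500, 3))
    for center in ([0, 0, 0], [3, 0, 0], [0, 3, 0])
])

# Cluster the raw XYZ coordinates into k spatial groups.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)

# Each point now carries a pseudo-label usable as a rough segment id.
for cluster_id in range(k):
    print(f"cluster {cluster_id}: {np.sum(labels == cluster_id)} points")
```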


Fully Automated Wound Tissue Segmentation Using Deep Learning on Mobile Devices: Cohort Study

#artificialintelligence

Background: Composition of tissue types within a wound is a useful indicator of its healing progression. Tissue composition is clinically used in wound healing tools (eg, Bates-Jensen Wound Assessment Tool) to assess risk and recommend treatment. However, wound tissue identification and the estimation of their relative composition is highly subjective. Consequently, incorrect assessments could be reported, leading to downstream impacts including inappropriate dressing selection, failure to identify wounds at risk of not healing, or failure to make appropriate referrals to specialists. Objective: This study aimed to measure inter- and intrarater variability in manual tissue segmentation and quantification among a cohort of wound care clinicians and determine if an objective assessment of tissue types (ie, size and amount) can be achieved using deep neural networks. Methods: A data set of 58 anonymized wound images of various types of chronic wounds from Swift Medical’s Wound Database was used to conduct the inter- and intrarater agreement study. The data set was split into 3 subsets with 50% overlap between subsets to measure intrarater agreement. In this study, 4 different tissue types (epithelial, granulation, slough, and eschar) within the wound bed were independently labeled by the 5 wound clinicians at 1-week intervals using a browser-based image annotation tool. In addition, 2 deep convolutional neural network architectures were developed for wound segmentation and tissue segmentation and were used in sequence in the workflow. These models were trained using 465,187 and 17,000 image-label pairs, respectively. This is the largest and most diverse reported data set used for training deep learning models for wound and wound tissue segmentation. The resulting models offer robust performance in diverse imaging conditions, are unbiased toward skin tones, and could execute in near real time on mobile devices. Results: A poor to moderate interrater agreement in identifying tissue types in chronic wound images was reported. A very poor Krippendorff α value of .014 for interrater variability when identifying epithelization was observed, whereas granulation was most consistently identified by the clinicians. The intrarater intraclass correlation (3,1), however, indicates that raters were relatively consistent when labeling the same image multiple times over a period. Our deep learning models achieved a mean intersection over union of 0.8644 and 0.7192 for wound and tissue segmentation, respectively. A cohort of wound clinicians, by consensus, rated 91% (53/58) of the tissue segmentation results to be between fair and good in terms of tissue identification and segmentation quality. Conclusions: The interrater agreement study validates that clinicians exhibit considerable variability when identifying and visually estimating wound tissue proportion. The proposed deep learning technique provides objective tissue identification and measurements to assist clinicians in documenting the wound more accurately and could have a significant impact on wound care when deployed at scale.
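
The paper reports mean intersection over union (IoU) as its segmentation metric. For reference, here is one common way to compute mean IoU over integer label maps with NumPy; the authors' exact implementation may differ (e.g., in how classes absent from both masks are handled).

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for (H, W) label maps.
    Classes absent from both masks are skipped so they don't skew the mean."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = (pred == c), (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 4-class example (0=epithelial, 1=granulation, 2=slough, 3=eschar).
pred = np.random.randint(0, 4, size=(64, 64))
target = np.random.randint(0, 4, size=(64, 64))
print(mean_iou(pred, target, num_classes=4))
```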


RStudio AI Blog: Train in R, run on Android: Image segmentation with torch

#artificialintelligence

In a sense, image segmentation is not that different from image classification. And as in image classification, the categories of interest depend on the task: foreground versus background, say; different types of tissue; different types of vegetation; et cetera. The present post is not the first on this blog to treat that topic, and like all prior ones, it makes use of a U-Net architecture to achieve its goal. It demonstrates how to perform data augmentation for an image segmentation task. It uses luz, torch's high-level interface, to train the model.
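
The post itself is written in R with torch and luz. For consistency with the rest of this digest, here is a minimal Python sketch of the key idea behind augmentation for segmentation: every random geometric transform must be applied identically to the image and its mask. Function names and tensor shapes are illustrative, not taken from the post.

```python
import random
import torch
from torchvision.transforms import InterpolationMode
import torchvision.transforms.functional as TF

def augment_pair(image, mask):
    """Apply the same random geometric transforms to an image and its mask."""
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    angle = random.uniform(-10.0, 10.0)
    image = TF.rotate(image, angle)
    # Nearest-neighbour interpolation keeps mask labels discrete.
    mask = TF.rotate(mask, angle, interpolation=InterpolationMode.NEAREST)
    return image, mask

img = torch.rand(3, 128, 128)                      # dummy RGB image
msk = torch.randint(0, 2, (1, 128, 128)).float()   # dummy binary mask
aug_img, aug_msk = augment_pair(img, msk)
```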


Vision beyond classification: Task II: Image Segmentation

#artificialintelligence

Image segmentation is a computer vision task in which we label specific regions of pixels in an image with their corresponding classes. Since we predict every pixel in the image, this task is commonly referred to as a dense prediction problem, whereas classification is a sparse prediction problem. There are two types of image segmentation: semantic segmentation and instance segmentation. Semantic segmentation is the process of labeling one or more specific regions of interest in an image. This process treats multiple objects within a single category as one entity.
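
To see what "dense prediction" means in code, here is a short sketch using a pretrained semantic segmentation model from torchvision: the network emits a class score for every pixel, and an argmax over the class dimension yields the dense label map. Note the `pretrained=True` argument is from older torchvision releases; newer ones use `weights=...` instead.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Pretrained FCN with a ResNet-50 backbone (21 Pascal VOC classes).
model = fcn_resnet50(pretrained=True).eval()

image = torch.rand(1, 3, 224, 224)  # dummy normalized RGB batch
with torch.no_grad():
    logits = model(image)["out"]    # (1, 21, 224, 224): a score per class
                                    # for *every* pixel -- dense prediction
labels = logits.argmax(dim=1)       # (1, 224, 224) per-pixel label map
print(labels.shape, labels.unique())
```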


Python Library for Image Annotation Conversion

#artificialintelligence

In research and in practice, having summary statistics of the dataset is important in many ways. Moreover, it gives you a holistic idea of the datasets you are going to deal with.
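
The library is not named in this excerpt, so as a hypothetical illustration, here is a stdlib-only sketch of the kind of summary statistics such an annotation tool might report for a COCO-format annotation file (the file name is made up).

```python
import json
from collections import Counter

# Hypothetical COCO-format annotation file.
with open("annotations.json") as f:
    coco = json.load(f)

# Map category ids to names, then count annotations per category.
id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])

print(f"{len(coco['images'])} images, {len(coco['annotations'])} annotations")
for name, n in counts.most_common():
    print(f"{name:>20}: {n}")
```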


Membrane marker selection for segmenting single cell spatial proteomics data - Nature Communications

#artificialintelligence

The ability to profile spatial proteomics at the single cell level enables the study of cell types, their spatial distribution, and interactions in several tissues and conditions. Current methods for cell segmentation in such studies rely on known membrane or cell boundary markers. However, for many tissues, an optimal set of markers is not known, and even within a tissue, different cell types may express different markers. Here we present RAMCES, a method that uses a convolutional neural network to learn the optimal markers for a new sample and outputs a weighted combination of the selected markers for segmentation. Testing RAMCES on several existing datasets indicates that it correctly identifies cell boundary markers, improving on methods that rely on a single marker or those that extend nuclei segmentations. Application to new spatial proteomics data demonstrates its usefulness for accurately assigning cell types based on the proteins expressed in segmented cells.
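
As a rough illustration of the output step described above (not RAMCES's actual code), here is a sketch of building a segmentation-ready image as a weighted combination of candidate marker channels. The CNN that produces the per-channel scores is omitted, and the numbers are made up.

```python
import numpy as np

# Four candidate membrane-marker channels from a multiplexed image.
channels = np.random.rand(4, 512, 512)
scores = np.array([0.1, 0.6, 0.25, 0.05])  # hypothetical CNN rankings

# Normalize scores and collapse channels into one weighted image.
weights = scores / scores.sum()
combined = np.tensordot(weights, channels, axes=1)  # shape (512, 512)

print(combined.shape)  # feed this into a downstream cell segmentation tool
```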