multispectral image
Automated scalable segmentation of neurons from multispectral images
Reconstruction of neuroanatomy is a fundamental problem in neuroscience. Stochastic expression of colors in individual cells is a promising tool, although its use in the nervous system has been limited due to various sources of variability in expression. Moreover, the intermingled anatomy of neuronal trees is challenging for existing segmentation algorithms. Here, we propose a method to automate the segmentation of neurons in such (potentially pseudo-colored) images. The method uses spatio-color relations between the voxels, generates supervoxels to reduce the problem size by four orders of magnitude before the final segmentation, and is parallelizable over the supervoxels. To quantify performance and gain insight, we generate simulated images, where the noise level and characteristics, the density of expression, and the number of fluorophore types are variable. We also present segmentations of real Brainbow images of the mouse hippocampus, which reveal many of the dendritic segments.
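The pipeline in the abstract above (group voxels into supervoxels by spatio-color similarity, then cluster the much smaller set of supervoxels) can be illustrated with a minimal sketch. This is not the authors' implementation: the grid partitioning and the greedy color clustering below are illustrative stand-ins for their supervoxelization and final segmentation steps.

```python
import numpy as np

def grid_supervoxels(volume, block=4):
    """Partition a (Z, Y, X, C) multispectral volume into cubic blocks and
    return one mean color per block. This stands in for the supervoxel step,
    which reduces the number of units to segment by orders of magnitude."""
    z, y, x, c = volume.shape
    zb, yb, xb = z // block, y // block, x // block
    blocks = volume[:zb * block, :yb * block, :xb * block].reshape(
        zb, block, yb, block, xb, block, c)
    return blocks.mean(axis=(1, 3, 5)).reshape(-1, c)  # (num_supervoxels, C)

def cluster_by_color(colors, tol=0.2):
    """Greedy clustering of supervoxel colors: join the first cluster whose
    centroid is within `tol`, else open a new one. A toy stand-in for the
    final segmentation over supervoxels."""
    centroids, labels = [], []
    for col in colors:
        for k, cen in enumerate(centroids):
            if np.linalg.norm(col - cen) < tol:
                labels.append(k)
                break
        else:
            centroids.append(col)
            labels.append(len(centroids) - 1)
    return np.array(labels)
```

On a synthetic two-color volume, eight supervoxels collapse into two clusters, mirroring how the reduction makes the final segmentation tractable and parallelizable per supervoxel.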
CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis
Baumann, Alexander, Ayala, Leonardo, Seidlitz, Silvia, Sellner, Jan, Studier-Fischer, Alexander, Özdemir, Berkin, Maier-Hein, Lena, Ilic, Slobodan
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding, and is already established as a critical modality in remote sensing. However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies, leading to camera-specific models with limited generalizability and inadequate cross-camera applicability. To address this bottleneck, we introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities. To enable the conversion of a spectral image with any channel dimensionality to a camera-agnostic representation, we introduce a novel spectral encoder, featuring a self-attention-cross-attention mechanism, to distill salient spectral information into learned spectral representations. Spatio-spectral pre-training is achieved with a novel feature-based self-supervision strategy tailored to CARL. Large-scale experiments across the domains of medical imaging, autonomous driving, and satellite imaging demonstrate our model's unique robustness to spectral heterogeneity, outperforming on datasets with both simulated and real-world cross-camera spectral variations. The scalability and versatility of the proposed approach position our model as a backbone for future spectral foundation models.
- Transportation > Ground > Road (0.48)
- Health & Medicine > Diagnostic Medicine > Imaging (0.48)
- Health & Medicine > Health Care Providers & Services (0.46)
- (2 more...)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- North America > United States (0.14)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > Singapore (0.04)
- Asia > China > Anhui Province > Hefei (0.04)
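The core idea of the CARL abstract above (mapping a variable number of spectral channels to a fixed-size, camera-agnostic representation) can be sketched with wavelength-aware channel tokens and a fixed set of learned queries. This is a minimal illustration, not the paper's architecture: the encoding scheme and the single attention layer below are assumptions.

```python
import numpy as np

def wavelength_encoding(wavelengths_nm, dim):
    """Sinusoidal encoding of each channel's center wavelength, so the model
    knows *what* each channel measured regardless of how many channels the
    camera has."""
    freqs = np.exp(np.linspace(0.0, -6.0, dim // 2))
    ang = np.outer(wavelengths_nm / 1000.0, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)  # (C, dim)

def cross_attention_pool(tokens, queries):
    """Attend a *fixed* set of learned queries over a *variable* number of
    channel tokens. The output shape depends only on the query count, which
    is what makes the representation camera-agnostic."""
    scores = queries @ tokens.T / np.sqrt(tokens.shape[1])  # (Q, C)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)           # softmax over channels
    return weights @ tokens                                 # (Q, dim)
```

A 4-channel multispectral camera and a 10-channel one then yield representations of identical shape, so one downstream head can serve both.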
Reviews: Automated scalable segmentation of neurons from multispectral images
After reading the authors' rebuttal I increased the technical quality to 2, and after reading the other reviews I increased the potential impact to 3. The authors replied to many questions, but not to all; in particular, the answer was not satisfactory regarding the parameter K, which is one of the crucial parameters in any segmentation algorithm. Why did they not provide the results using the suggested automatic method in Fig. 4, instead of cycling over possible (wrong) numbers of clusters? I would have expected to see in the results the performance with at least one auto-tuning heuristic, to assess its generality (at least the one suggested by the authors). The following issues were found in the paper: 1) In Eq. (2), when constructing the adjacency matrix, are the ranges of the distances d(...) and \delta(...) the same? In line 114 d(s) is a measure of heterogeneity, in line 125 of distance, and in Eq. (2) of color distance.
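The reviewer's concern about the ranges of the color distance d and the spatial distance \delta can be made concrete with a small sketch of an affinity (adjacency) matrix. This is an illustration of the general construction, not the paper's Eq. (2): normalizing each distance by its own scale parameter is one standard way to make the two ranges commensurable.

```python
import numpy as np

def affinity(colors, positions, sigma_c=0.5, sigma_s=2.0):
    """Pairwise affinity combining a color distance and a spatial distance.
    Dividing each by its own scale (sigma_c, sigma_s) puts the two terms on
    comparable ranges before they are mixed -- the issue the reviewer raises
    about d(...) and delta(...)."""
    dc = np.linalg.norm(colors[:, None] - colors[None], axis=-1)     # color distance
    ds = np.linalg.norm(positions[:, None] - positions[None], axis=-1)  # spatial distance
    return np.exp(-(dc / sigma_c) ** 2 - (ds / sigma_s) ** 2)
```

Two nearby same-color supervoxels then get a much higher affinity than a distant, differently colored pair, as expected.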
Enhancing Environmental Monitoring through Multispectral Imaging: The WasteMS Dataset for Semantic Segmentation of Lakeside Waste
Zhu, Qinfeng, Weng, Ningxin, Fan, Lei, Cai, Yuanzhi
Environmental monitoring of lakeside green areas is crucial for environmental protection. Compared to manual inspections, computer vision technologies offer a more efficient solution when deployed on-site. Multispectral imaging provides diverse information about objects under different spectrums, aiding in the differentiation between waste and lakeside lawn environments. This study introduces WasteMS, the first multispectral dataset established for the semantic segmentation of lakeside waste. WasteMS includes a diverse range of waste types in lawn environments, captured under various lighting conditions. We implemented a rigorous annotation process to label waste in images. Representative semantic segmentation frameworks were used to evaluate segmentation accuracy on WasteMS. Challenges encountered when using WasteMS for segmenting waste on lakeside lawns are discussed. The WasteMS dataset is available at https://github.com/zhuqinfeng1999/WasteMS.
- Oceania > Australia (0.04)
- Europe > United Kingdom (0.04)
- Asia > Philippines > Luzon > Bicol Region > Province of Camarines Sur (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.70)
GloSoFarID: Global multispectral dataset for Solar Farm IDentification in satellite imagery
Solar Photovoltaic (PV) technology is increasingly recognized as a pivotal solution in the global pursuit of clean and renewable energy. This technology addresses the urgent need for sustainable energy alternatives by converting solar power into electricity without greenhouse gas emissions. It not only curtails global carbon emissions but also reduces reliance on finite, non-renewable energy sources. In this context, monitoring solar panel farms becomes essential for understanding and facilitating the worldwide shift toward clean energy. This study contributes to this effort by developing the first comprehensive global dataset of multispectral satellite imagery of solar panel farms. This dataset is intended to form the basis for training robust machine learning models, which can accurately map and analyze the expansion and distribution of solar panel farms globally. The insights gained from this endeavor will be instrumental in guiding informed decision-making for a sustainable energy future. https://github.com/yzyly1992/GloSoFarID
- North America > United States (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Europe > Denmark (0.04)
- Energy > Renewable > Solar (1.00)
- Energy > Renewable > Geothermal > Geothermal Energy Exploration and Development > Geophysical Analysis & Survey (0.71)
Few-shot Multispectral Segmentation with Representations Generated by Reinforcement Learning
Jayakody, Dilith, Ambegoda, Thanuja
The task of multispectral image segmentation (segmentation of images with numerous channels/bands, each capturing a specific range of wavelengths of electromagnetic radiation) has been previously explored in contexts with large amounts of labeled data. However, these models tend not to generalize well to datasets of smaller size. In this paper, we propose a novel approach for improving few-shot segmentation performance on multispectral images using reinforcement learning to generate representations. These representations are generated in the form of mathematical expressions between channels and are tailored to the specific class being segmented. Our methodology involves training an agent to identify the most informative expressions, updating the dataset using these expressions, and then using the updated dataset to perform segmentation. Due to the limited length of the expressions, the model receives useful representations without any added risk of overfitting. We evaluate our approach on several multispectral datasets and demonstrate that it boosts the performance of segmentation algorithms.
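The "mathematical expressions between channels" described above can be sketched by evaluating one such expression over a multispectral image. This is only an illustration of what a generated representation looks like: the expression syntax (`c0`, `c1`, ...) is an assumption, and a real system would parse expressions properly rather than use `eval`, which is shown here only for brevity on a restricted namespace.

```python
import numpy as np

def eval_band_expression(image, expr):
    """Evaluate a short arithmetic expression between channels of an
    (H, W, C) image, e.g. the NDVI-like '(c3 - c2) / (c3 + c2)'. In the
    paper an RL agent searches for such expressions; here we only show how
    one expression turns a multispectral image into a new 1-band
    representation tailored to a target class."""
    channels = {f"c{i}": image[..., i].astype(float)
                for i in range(image.shape[-1])}
    # Restricted namespace: only the channel arrays are visible to eval.
    return eval(expr, {"__builtins__": {}}, channels)
```

Because expressions stay short, the added representation carries little capacity of its own, which matches the paper's argument that it helps without extra overfitting risk.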
- Asia > Sri Lanka (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > Spain > Andalusia > Granada Province > Granada (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Research Report (1.00)
- Overview > Innovation (0.34)
A deep learning experiment for semantic segmentation of overlapping characters in palimpsests
Perino, Michela, Ginolfi, Michele, Felici, Anna Candida, Rosellini, Michela
Palimpsests refer to historical manuscripts where erased writings have been partially covered by the superimposition of a second writing. By employing imaging techniques, e.g., multispectral imaging, it becomes possible to identify features that are imperceptible to the naked eye, including faded and erased inks. When dealing with overlapping inks, Artificial Intelligence techniques can be utilized to disentangle complex nodes of overlapping letters. In this work, we propose deep learning-based semantic segmentation as a method for identifying and segmenting individual letters in overlapping characters. The experiment was conceived as a proof of concept, focusing on the palimpsests of the Ars Grammatica by Prisciano as a case study. Furthermore, caveats and prospects of our approach combined with multispectral imaging are also discussed.
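Segmenting overlapping characters, as in the abstract above, implies that one pixel can belong to two letters at once: the erased underwriting and the later overwriting. A minimal sketch of the resulting output formulation (independent per-class masks rather than a mutually exclusive argmax) follows; the threshold-on-logits formulation is an assumption, not the paper's exact head.

```python
import numpy as np

def multilabel_masks(logits, threshold=0.0):
    """Turn per-class logits of shape (H, W, K) into K independent binary
    masks. Unlike argmax segmentation, a pixel may be positive for several
    classes at once; thresholding logits at 0 is equivalent to
    sigmoid(x) > 0.5."""
    return logits > threshold

def overlap_map(masks):
    """Pixels claimed by two or more classes -- the 'nodes' of overlapping
    letters that the approach aims to disentangle."""
    return masks.sum(axis=-1) >= 2
```

The overlap map makes explicit exactly which pixels an exclusive, single-label segmenter would be forced to mis-assign.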
- Europe > Italy > Lazio > Rome (0.06)
- Europe > San Marino > Fiorentino > Fiorentino (0.04)
- Europe > Holy See (0.04)
- Europe > France > Occitanie > Hérault > Montpellier (0.04)
PlantPlotGAN: A Physics-Informed Generative Adversarial Network for Plant Disease Prediction
Lopes, Felipe A., Sagan, Vasit, Esposito, Flavio
Monitoring plantations is crucial for crop management and producing healthy harvests. Unmanned Aerial Vehicles (UAVs) have been used to collect multispectral images that aid in this monitoring. However, given the number of hectares to be monitored and the limitations of flight, plant disease signals become visually clear only in the later stages of plant growth and only if the disease has spread throughout a significant portion of the plantation. This limited amount of relevant data hampers the prediction models, as the algorithms struggle to generalize patterns with unbalanced or unrealistic augmented datasets effectively. To address this issue, we propose PlantPlotGAN, a physics-informed generative model capable of creating synthetic multispectral plot images with realistic vegetation indices. These indices served as a proxy for disease detection and were used to evaluate if our model could help increase the accuracy of prediction models. The results demonstrate that the synthetic imagery generated from PlantPlotGAN outperforms state-of-the-art methods regarding the Fréchet inception distance. Moreover, prediction models achieve higher accuracy metrics when trained with synthetic and original imagery for earlier plant disease detection compared to the training processes based solely on real imagery.
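The vegetation indices that the abstract above uses as a disease proxy are simple per-pixel band combinations. As one concrete example (the specific indices used by PlantPlotGAN are not listed in the abstract), NDVI from near-infrared and red bands can be computed as follows.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index from NIR and red bands.
    Healthy vegetation reflects NIR strongly and absorbs red, pushing NDVI
    toward 1; stressed or diseased vegetation drifts toward 0 or below.
    `eps` guards against division by zero on dark pixels."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)
```

Because indices like this are physically interpretable, requiring a generator to reproduce them realistically is one natural way to impose the "physics-informed" constraint the paper describes.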
- North America > United States > Missouri > St. Louis County > St. Louis (0.04)
- North America > Puerto Rico > Peñuelas > Peñuelas (0.04)
- South America > Brazil (0.04)
- (2 more...)
- Research Report > Promising Solution (0.34)
- Research Report > New Finding (0.34)