colour space
An Enhanced Harmonic Densely Connected Hybrid Transformer Network Architecture for Chronic Wound Segmentation Utilising Multi-Colour Space Tensor Merging
Cassidy, Bill, Mcbride, Christian, Kendrick, Connah, Reeves, Neil D., Pappachan, Joseph M., Fernandez, Cornelius J., Chacko, Elias, Brüngel, Raphael, Friedrich, Christoph M., Alotaibi, Metib, AlWabel, Abdullah Abdulaziz, Alderwish, Mohammad, Lai, Kuan-Ying, Yap, Moi Hoon
Chronic wounds and associated complications present ever-growing burdens for clinics and hospitals worldwide. Venous, arterial, diabetic, and pressure wounds are becoming increasingly common globally. These conditions can result in highly debilitating repercussions for those affected, with limb amputations and increased mortality risk resulting from infection becoming more common. New methods to assist clinicians in chronic wound care are therefore vital to maintain high-quality care standards. This paper presents an improved HarDNet segmentation architecture which integrates a contrast-eliminating component in the initial layers of the network to enhance feature learning. We also utilise a multi-colour space tensor merging process and adjust the harmonic shape of the convolution blocks to facilitate these additional features. We train our proposed model using wound images from light-skinned patients and test the model on two test sets (one set with ground truth, and one without) comprising only darker-skinned cases. Subjective ratings are obtained from clinical wound experts, with the intraclass correlation coefficient used to determine inter-rater reliability. For the darker-skin-tone test set with ground truth, we demonstrate improvements in terms of Dice similarity coefficient (+0.1221) and intersection over union (+0.1274). Qualitative analysis showed high expert ratings, with improvements of >3% demonstrated when comparing the baseline model with the proposed model. This paper presents the first study to focus on darker skin tones for chronic wound segmentation using models trained only on wound images exhibiting lighter skin. Diabetes is highly prevalent in countries where patients have darker skin tones, highlighting the need for a greater focus on such cases. Additionally, we conduct the largest qualitative study to date for chronic wound segmentation.
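The two quantitative metrics the abstract reports, Dice similarity coefficient and intersection over union, can be sketched in a few lines of NumPy for binary segmentation masks (a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0
```

Both return values in [0, 1]; a +0.1221 Dice improvement, as reported above, is therefore a substantial gain on this scale.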
- Europe > United Kingdom > England > Lincolnshire (0.04)
- Europe > Germany > Thuringia > Erfurt (0.04)
- Europe > Germany > North Rhine-Westphalia > Arnsberg Region > Dortmund (0.04)
- (9 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
A Novel Approach to Breast Cancer Histopathological Image Classification Using Cross-Colour Space Feature Fusion and Quantum-Classical Stack Ensemble Method
Mallick, Sambit, Paul, Snigdha, Sen, Anindya
Breast cancer classification stands as a pivotal pillar in ensuring timely diagnosis and effective treatment. This study with histopathological images underscores the profound significance of harnessing the synergistic capabilities of colour space ensembling and quantum-classical stacking to elevate the precision of breast cancer classification. By delving into the distinct colour spaces of RGB, HSV and CIE L*u*v*, the authors initiated a comprehensive investigation guided by advanced methodologies. Employing the DenseNet121 architecture for feature extraction, the authors have capitalized on the robustness of Random Forest, SVM, QSVC, and VQC classifiers. This research encompasses a unique feature fusion technique within the colour space ensemble. This approach not only deepens our comprehension of breast cancer classification but also marks a milestone in personalized medical assessment. The amalgamation of quantum and classical classifiers through stacking emerges as a potent catalyst, effectively mitigating the inherent constraints of individual classifiers and paving a robust path towards more dependable and refined breast cancer identification. Through rigorous experimentation and meticulous analysis, fusion of colour spaces such as RGB with HSV and RGB with CIE L*u*v* achieves a classification accuracy approaching unity. This underscores the transformative potential of our approach, where the fusion of diverse colour spaces and the synergy of quantum and classical realms converge to establish a new horizon in medical diagnostics. Thus, the implications of this research extend across medical disciplines, offering promising avenues for advancing diagnostic accuracy and treatment efficacy.
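As a rough illustration of the paper's two core ideas, cross-colour-space feature fusion by concatenation and a stacked ensemble, here is a purely classical scikit-learn sketch; the DenseNet121 features are replaced by random stand-ins, and the quantum classifiers (QSVC, VQC) are swapped for classical base learners:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Hypothetical stand-ins for DenseNet121 feature vectors extracted from
# two colour-space renderings (e.g. RGB and HSV) of the same images.
rng = np.random.default_rng(0)
rgb_feats = rng.normal(size=(200, 32))
hsv_feats = rng.normal(size=(200, 32))
labels = rng.integers(0, 2, size=200)

# Cross-colour-space feature fusion: concatenate per-space feature vectors.
fused = np.concatenate([rgb_feats, hsv_feats], axis=1)

# Stacking: base learners feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(fused, labels)
print(stack.predict(fused[:5]))
```

In the paper's setup the base estimators would include the quantum classifiers; the fusion and stacking mechanics are unchanged.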
- North America > United States > New York (0.04)
- Europe > Switzerland (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- Asia > India > West Bengal > Kolkata (0.04)
- Research Report > Promising Solution (0.50)
- Overview > Innovation (0.41)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (1.00)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
Galaxy Classification Using Transfer Learning and Ensemble of CNNs With Multiple Colour Spaces
Big data has become the norm in astronomy, making it an ideal domain for computer science research. Astronomers typically classify galaxies based on their morphologies, a practice that dates back to Hubble (1936). With small datasets, classification could be performed by individuals or small teams, but the exponential growth of data from modern telescopes necessitates automated classification methods. In December 2013, Winton Capital, Galaxy Zoo, and the Kaggle team created the Galaxy Challenge, which tasked participants with developing models to classify galaxies. The Kaggle Galaxy Zoo dataset has since been widely used by researchers. This study investigates the impact of colour space transformation on classification accuracy and explores the effect of CNN architecture on this relationship. Multiple colour spaces (RGB, XYZ, LAB, etc.) and CNN architectures (VGG, ResNet, DenseNet, Xception, etc.) are considered, utilizing pre-trained models and weights. However, as most pre-trained models are designed for natural RGB images, we examine their performance with transformed, non-natural astronomical images. We test our hypothesis by evaluating individual networks with RGB and transformed colour spaces and examining various ensemble configurations. A minimal hyperparameter search ensures optimal results. Our findings indicate that using transformed colour spaces in individual networks yields higher validation accuracy, and ensembles of networks and colour spaces further improve accuracy. This research aims to validate the utility of colour space transformation for astronomical image classification and serve as a benchmark for future studies.
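The two mechanisms this abstract studies, feeding a pre-trained CNN a transformed colour space and averaging an ensemble's outputs, can be sketched as follows; the sRGB-to-XYZ matrix is the standard D65 transform, and the CNN itself is abstracted away:

```python
import numpy as np

# Standard linear sRGB -> CIE XYZ (D65) matrix; applied to the input
# images before a pre-trained CNN, whose weights are left unchanged.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 array of linear RGB values in [0, 1]."""
    return img @ RGB_TO_XYZ.T

def ensemble_predict(prob_maps: list) -> np.ndarray:
    """Average the softmax outputs of several (network, colour space) pairs."""
    return np.mean(prob_maps, axis=0)
```

An ensemble member is then a particular CNN architecture paired with a particular colour-space transform of the same image; averaging their class probabilities is one common combination rule, though the paper may use a different one.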
- Oceania > Australia > Western Australia > Perth (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Merseyside > Liverpool (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Pattern Recognition (0.68)
Roadmap to Computer Vision - KDnuggets
Computer Vision (CV) is nowadays one of the main applications of Artificial Intelligence. In this article, I will walk you through some of the main steps which compose a Computer Vision system. We will now briefly walk through some of the main processes our data might go through at each of these three different steps. When trying to implement a CV system, we need to take into consideration two main components: the image acquisition hardware and the image processing software. One of the main requirements to meet in order to deploy a CV system is to test its robustness.
DIFAR: Deep Image Formation and Retouching
Moran, Sean, Slabaugh, Gregory
Figure 1 caption: Given (a) a poorly exposed image, DIFAR (c) produces an image with pleasing contrast and colour, better matching the ground truth (d) compared to the state-of-the-art DeepUPE model [42] (b).
Abstract: We present a novel neural network architecture for the image signal processing (ISP) pipeline. In a camera system, the ISP is a critical component that forms a high-quality RGB image from RAW camera sensor data. Typical ISP pipelines sequentially apply a complex set of traditional image processing modules, such as demosaicing, denoising, tone mapping, etc. We introduce a new deep network that replaces all these modules, dubbed Deep Image Formation And Retouching (DIFAR). DIFAR introduces a multi-scale context-aware pixel-level block for local denoising/demosaicing operations and a retouching block for global refinement of image colour, luminance and saturation. DIFAR can also be trained for RGB-to-RGB image enhancement. DIFAR is parameter-efficient and outperforms recently proposed deep learning approaches in both objective and perceptual metrics, setting new state-of-the-art performance on multiple datasets including Samsung S7 [38] and MIT-Adobe 5k [6].
1. Introduction
Image quality is of fundamental importance in any imaging system, including DSLR and smartphone cameras. At the imaging sensor, RAW data is normally captured on a color filter array (such as the well-known Bayer pattern) where at each pixel, only a red, green, or blue color is available. This mosaiced RAW data suffers from noise, vignetting, lack of white balance, and many other defects, and additionally has a high dynamic range. The camera's image signal processing (ISP) pipeline is responsible for forming a high-quality RGB image with minimal noise, pleasing colors, sharp detail, and good contrast from the degraded RAW data.
In most cases, the ISP is realised as a modular sequence of traditional image signal processing algorithms (Figure 2), each responsible for a single well-defined image operation (e.g. demosaicing, denoising, or tone mapping).
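The modular pipeline described here can be caricatured in a few lines. This is a toy stand-in for a traditional ISP (2x2 binning demosaic, hypothetical white-balance gains, gamma tone curve), not the DIFAR network:

```python
import numpy as np

def simple_isp(raw: np.ndarray) -> np.ndarray:
    """Toy ISP for an RGGB Bayer mosaic: demosaic by 2x2 binning,
    apply white-balance gains, then a gamma tone curve."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two greens
    b = raw[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1)
    gains = np.array([2.0, 1.0, 1.5])              # hypothetical white-balance gains
    rgb = np.clip(rgb * gains, 0.0, 1.0)
    return rgb ** (1.0 / 2.2)                      # gamma tone mapping
```

Each line corresponds to one traditional module (demosaic, white balance, tone map); DIFAR's claim is that a single learned network can subsume this whole chain.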
- North America > Mexico > Gulf of Mexico (0.46)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
ŷhat Self-Organising Maps: In Depth
About David: David Asboth is a Data Scientist with a software development background. He's had many different job titles over the years, with a common theme: he solves human problems with computers and data. This post originally appeared on his blog, davidasboth.com. In Part 1, I introduced the concept of Self-Organising Maps (SOMs). Now in Part 2 I want to step through the process of training and using a SOM – both the intuition and the Python code.
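The training process the post steps through can be condensed into a minimal NumPy sketch (illustrative hyperparameters, not the post's exact code): pick a sample, find its best-matching unit (BMU), and pull the BMU and its grid neighbours towards the sample, with a decaying learning rate and neighbourhood radius.

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organising Map training loop."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), grid)  # best-matching unit
        lr = lr0 * np.exp(-t / iters)                   # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)             # decaying radius
        # Gaussian neighbourhood on the grid, centred on the BMU
        grid_dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights
```

After training, each grid cell's weight vector approximates a region of the input space, which is what makes the map useful for visualisation and clustering.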
ŷhat Self-Organising Maps: An Introduction
About David: David Asboth is a Data Scientist with a software development background. He's had many different job titles over the years, with a common theme: he solves human problems with computers and data. This post originally appeared on his blog, davidasboth.com. When you learn about machine learning techniques, you usually get a selection of the usual suspects. In fact, KDnuggets has a good post about the 10 machine learning algorithms you should know.
A Fusion Approach for Efficient Human Skin Detection
Tan, Wei Ren, Chan, Chee Seng, Yogarajah, Pratheepan, Condell, Joan
A reliable human skin detection method that is adaptable to different human skin colours and illumination conditions is essential for better human skin segmentation. Even though different human skin colour detection solutions have been successfully applied, they are prone to false skin detection and are not able to cope with the variety of human skin colours across different ethnicities. Moreover, existing methods require a high computational cost. In this paper, we propose a novel human skin detection approach that combines a smoothed 2D histogram and a Gaussian model for automatic human skin detection in colour image(s). In our approach, an eye detector is used to refine the skin model for a specific person. The proposed approach reduces computational costs as no training is required, and it improves the accuracy of skin detection despite wide variation in ethnicity and illumination. To the best of our knowledge, this is the first method to employ a fusion strategy for this purpose. Qualitative and quantitative results on three standard public datasets and a comparison with state-of-the-art methods have shown the effectiveness and robustness of the proposed approach.
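The fusion idea, combining a histogram-derived probability map with a single-Gaussian skin model in a chrominance plane, might look like this in outline; the Gaussian parameters and the product-style fusion rule below are illustrative assumptions, not values or formulas taken from the paper:

```python
import numpy as np

def gaussian_skin_prob(cr, cb, mean=(150.0, 110.0),
                       cov=((65.0, 10.0), (10.0, 40.0))):
    """Single-Gaussian skin likelihood in the Cr-Cb plane.
    The mean and covariance here are illustrative, not fitted values."""
    d = np.stack([cr - mean[0], cb - mean[1]], axis=-1)
    inv = np.linalg.inv(np.asarray(cov))
    m = np.einsum('...i,ij,...j->...', d, inv, d)  # Mahalanobis distance squared
    return np.exp(-0.5 * m)

def fuse(hist_prob, gauss_prob, alpha=0.5):
    """Weighted-product fusion of two per-pixel skin probability maps."""
    return hist_prob ** alpha * gauss_prob ** (1 - alpha)
```

The smoothed-2D-histogram map would come from backprojecting a chrominance histogram onto the image; thresholding the fused map then yields the binary skin mask.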
- Europe > United Kingdom > Northern Ireland (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- Asia > South Korea > Ulsan > Ulsan (0.04)
- (2 more...)