The Condition-Number Principle for Prototype Clustering
We develop a geometric framework that links objective accuracy to structural recovery in prototype-based clustering. The analysis is algorithm-agnostic and applies to a broad class of admissible loss functions. We define a clustering condition number that compares within-cluster scale to the minimum loss increase required to move a point across a cluster boundary. When this quantity is small, any solution with a small suboptimality gap must also have a small misclassification error relative to a benchmark partition. The framework also clarifies a fundamental trade-off between robustness and sensitivity to cluster imbalance, leading to sharp phase transitions for exact recovery under different objectives. The guarantees are deterministic and non-asymptotic, and they separate the role of algorithmic accuracy from the intrinsic geometric difficulty of the instance. We further show that errors concentrate near cluster boundaries and that sufficiently deep cluster cores are recovered exactly under strengthened local margins. Together, these results provide a geometric principle for interpreting low objective values as reliable evidence of meaningful clustering structure.
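The abstract's central quantity can be sketched schematically. The notation below (σ_within, Δ, κ, and the form of the bound) is illustrative shorthand, not the paper's own definitions:

```latex
% kappa compares within-cluster scale to the smallest loss increase
% needed to move any point across a cluster boundary (the margin Delta):
\kappa \;=\; \frac{\sigma_{\text{within}}}{\Delta},
\qquad
\Delta \;=\; \min_{x}\,\Big(\min_{j \neq c(x)} L_j(x) \;-\; L_{c(x)}(x)\Big).

% The principle: when kappa is small, a small suboptimality gap forces
% a small misclassification error relative to the benchmark partition:
\mathrm{err}(\widehat{C}) \;\le\; f(\kappa)\cdot\big(L(\widehat{C}) - L(C^{\star})\big).
```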
Explanation-based Data Augmentation for Image Classification
Existing works have generated explanations for deep neural network decisions to provide insights into model behavior. We observe that these explanations can also be used to identify concepts that caused misclassifications. This allows us to understand the possible limitations of the dataset used to train the model, particularly the under-represented regions in the dataset.
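The idea in the abstract can be sketched as follows. This is not the authors' pipeline: concept extraction from explanations is assumed given, and `augmentation_targets` and its threshold are illustrative names.

```python
from collections import Counter

def augmentation_targets(samples, error_rate_threshold=0.5):
    """Flag candidate under-represented concepts for targeted augmentation.

    samples: iterable of (concepts, correct) pairs, where `concepts` lists
    explanation-derived concepts present in a sample and `correct` says
    whether the model classified that sample correctly.
    """
    total, wrong = Counter(), Counter()
    for concepts, correct in samples:
        for c in concepts:
            total[c] += 1
            if not correct:
                wrong[c] += 1
    # Concepts that co-occur with misclassification unusually often are
    # returned as candidates for collecting or synthesizing more data.
    return sorted(c for c in total
                  if wrong[c] / total[c] >= error_rate_threshold)
```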
To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation
Accuracy is a commonly adopted performance metric in various classification tasks, measuring the proportion of correctly classified samples among all samples. It assumes equal importance for all classes, and hence equal severity for all misclassifications. However, in emotion classification, due to the psychological similarities between emotions, misclassifying a certain emotion into one class may be more severe than into another; e.g., misclassifying 'excitement' as 'anger' is apparently more severe than misclassifying it as 'awe'. Although highly meaningful for many applications, metrics capable of measuring these kinds of misclassification in visual emotion recognition tasks have yet to be explored. In this paper, based on Mikel's emotion wheel from psychology, we propose a novel approach for evaluating performance in visual emotion recognition, which takes into account the distance on the emotion wheel between different emotions to mimic the psychological nuances of emotions. Experimental results in semi-supervised learning on emotion recognition and a user study show that our proposed metric is more effective than accuracy at assessing performance and conforms to the cognitive laws of human emotions.
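A distance-weighted score of the kind the abstract describes can be sketched as below. This is not the paper's metric; the wheel ordering in `WHEEL` is a placeholder arrangement of Mikel's eight emotion categories, and the linear credit function is an illustrative choice.

```python
# Placeholder ordering of Mikel's eight emotions around a wheel (assumption).
WHEEL = ["amusement", "awe", "contentment", "excitement",
         "anger", "disgust", "fear", "sadness"]

def wheel_distance(a: str, b: str) -> int:
    """Minimal number of steps between two emotions around the wheel."""
    i, j = WHEEL.index(a), WHEEL.index(b)
    d = abs(i - j)
    return min(d, len(WHEEL) - d)  # circular distance, at most 4 for 8 classes

def wheel_weighted_score(y_true, y_pred) -> float:
    """Accuracy-like score that discounts errors by wheel distance:
    full credit for a correct label, decreasing credit with distance,
    zero credit for predicting the opposite emotion on the wheel."""
    d_max = len(WHEEL) // 2
    credits = [1.0 - wheel_distance(t, p) / d_max
               for t, p in zip(y_true, y_pred)]
    return sum(credits) / len(credits)
```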
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes
Deep neural networks (DNNs) are notorious for making more mistakes on classes that have substantially fewer samples than the others during training. Such class imbalance is ubiquitous in clinical applications and crucial to handle, because the classes with fewer samples most often correspond to critical cases (e.g., cancer) where misclassifications can have severe consequences. To avoid missing such cases, binary classifiers need to be operated at high True Positive Rates (TPRs) by setting a higher threshold, but this comes at the cost of very high False Positive Rates (FPRs) for problems with class imbalance. Existing methods for learning under class imbalance most often do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for problems where misclassification of the positive, i.e., critical, class samples is associated with higher cost. To this end, we pose the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve (AUC) by prioritizing FPR reduction at high TPR. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond the binary setting, we also propose two possible extensions of the proposed constraint for multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results demonstrate that the proposed method improves on the baselines in the majority of cases, attaining higher accuracy on critical classes while reducing the misclassification rate for non-critical class samples.
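The ALM construction the abstract refers to can be sketched generically. This is not the paper's exact formulation: `fpr_at_tpr`, the tolerance, and the penalty coefficients are illustrative names, and a real training loop would replace the non-differentiable FPR term with a smooth surrogate.

```python
import numpy as np

def fpr_at_tpr(scores, labels, target_tpr=0.95):
    """FPR measured at the highest threshold still achieving target TPR.
    `scores` are classifier outputs; `labels` are {0, 1} with 1 = positive."""
    pos = np.sort(scores[labels == 1])
    # Threshold such that a target_tpr fraction of positives score above it.
    idx = min(int((1.0 - target_tpr) * len(pos)), len(pos) - 1)
    thr = pos[idx]
    neg = scores[labels == 0]
    return float(np.mean(neg >= thr))

def augmented_lagrangian(base_loss, scores, labels,
                         lam, mu, fpr_tol=0.05, target_tpr=0.95):
    """Classic ALM form: base_loss + lam * g + (mu / 2) * g^2, where g is
    the violation of the constraint 'FPR at target TPR <= fpr_tol'."""
    g = max(0.0, fpr_at_tpr(scores, labels, target_tpr) - fpr_tol)
    return base_loss + lam * g + 0.5 * mu * g * g

# In an ALM outer loop, after each inner minimization one would update
# lam <- lam + mu * g and possibly increase mu before re-solving.
```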
A Set of Rules for Model Validation
The validation of a data-driven model is the process of assessing the model's ability to generalize to new, unseen data in the population of interest. This paper proposes a set of general rules for model validation. These rules are designed to help practitioners create reliable validation plans and report their results transparently. While no validation scheme is flawless, these rules can help practitioners ensure their strategy is sufficient for practical use, openly discuss any limitations of their validation strategy, and report clear, comparable performance metrics.
Keywords: Validation, Cross-validation
1. Introduction
Model validation is a fundamental task in all modern data-driven systems, whether they fall under the broad categories of Statistics, Machine Learning (ML), Artificial Intelligence (AI), or more specialized fields like chemometrics. Validation has become a major focus for regulatory and standardization bodies, with key reports and standards highlighting the growing concern for ensuring the trustworthiness and reliability of data-driven models:
- NIST AI Risk Management Framework (AI RMF 1.0, 2023): published by the U.S. Department of Commerce, this framework provides management techniques to address the risks and ensure the trustworthiness of AI systems, with validation as a core component.
- The EU AI Act of 2024, a landmark piece of EU legislation that categorizes AI systems by risk level, where validation is not defined as a best practice but as a legal requirement within the conformity assessment.
- The ISO/IEC TS 4213:2022, by the International Organization for Standardization (ISO), describes approaches and methods to ensure the rele…
- The IEEE P2841-2022 is a recommended practice for the framework and process for deep learning evaluation.
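The generalization assessment described in the opening sentence can be illustrated with a minimal k-fold cross-validation loop. This is a generic sketch, not one of the paper's rules; `fit` and `score` are placeholder callables standing in for any model and metric.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices once, then split them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(data, k, fit, score):
    """Return per-fold held-out scores: the model is always scored on
    data that was excluded from fitting, estimating generalization."""
    results = []
    for held_out in k_fold_indices(len(data), k):
        held = set(held_out)
        train = [data[i] for i in range(len(data)) if i not in held]
        test = [data[i] for i in held_out]
        model = fit(train)
        results.append(score(model, test))
    return results
```

Reporting all k fold scores (not just their mean) is one way to make the spread of performance estimates visible, in the spirit of the transparent reporting the paper calls for.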