interval label
A Novel Hybrid Ordinal Learning Model with Health Care Application
Wang, Lujia, Wang, Hairong, Su, Yi, Lure, Fleming, Li, Jing
Ordinal learning (OL) is a class of machine learning models with broad utility in health care applications, such as diagnosing different grades of a disease (e.g., mild, modest, severe) and predicting the speed of disease progression (e.g., very fast, fast, moderate, slow). This paper tackles the situation in which precisely labeled samples are limited in the training set due to cost or availability constraints, while samples with imprecise labels may be abundant. We focus on imprecise labels that are intervals, i.e., one knows that a sample's label lies within an interval of labels but not which unique label it has. This situation is quite common in health care datasets due to limitations of the diagnostic instrument, sparse clinical visits, and/or patient dropout. Limited research has been done on OL models with imprecise/interval labels. We propose a new Hybrid Ordinal Learner (HOL) that integrates samples with both precise and interval labels to train a robust OL model. We also develop a tractable and efficient optimization algorithm to solve the HOL formulation. We compare HOL with several recently developed OL methods on four benchmark datasets, demonstrating the superior performance of HOL. Finally, we apply HOL to a real-world dataset for predicting the speed of progression to Alzheimer's Disease (AD) in individuals with Mild Cognitive Impairment (MCI), based on a combination of multi-modality neuroimaging and demographic/clinical data. HOL achieves high prediction accuracy and outperforms existing methods. The capability to accurately predict the speed of progression to AD for each individual with MCI could help facilitate more individually optimized interventional strategies.
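The core idea of training with interval labels can be illustrated with a threshold-based ordinal loss. The sketch below is a generic illustration under assumed hinge-loss conventions, not the authors' HOL formulation: thresholds that separate labels inside a sample's interval contribute no loss, so a precise label is just the special case `lo == hi`.

```python
def interval_ordinal_loss(score, lo, hi, thetas):
    """Hinge-style ordinal loss for an interval label [lo, hi] (1-based labels).

    Illustrative sketch, not the HOL formulation from the paper.  With K
    labels there are K-1 ordered thresholds; threshold j separates labels
    j and j+1.  Only thresholds outside the interval penalize the score,
    so interval-labeled samples constrain the model less than precise ones.
    """
    loss = 0.0
    for j, th in enumerate(thetas, start=1):
        if j < lo:                 # score should lie above this threshold
            loss += max(0.0, 1.0 - (score - th))
        elif j >= hi:              # score should lie below this threshold
            loss += max(0.0, 1.0 + (score - th))
        # thresholds with lo <= j < hi fall inside the interval: no penalty
    return loss

thetas = [0.0, 3.0, 6.0]                      # 4 ordinal labels
print(interval_ordinal_loss(1.5, 2, 2, thetas))   # precise label 2
print(interval_ordinal_loss(4.5, 2, 4, thetas))   # interval label [2, 4]
```

Summing this loss over both precise and interval-labeled samples gives one plausible way to train jointly on the two label types.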
A Novel Markovian Framework for Integrating Absolute and Relative Ordinal Emotion Information
Wu, Jingyao, Dang, Ting, Sethu, Vidhyasaharan, Ambikairajah, Eliathamby
There is growing interest in affective computing in the representation and prediction of emotions along ordinal scales. However, the term ordinal emotion label has been used to refer both to absolute notions, such as low or high arousal, and to relative notions, such as arousal being higher at one instance than at another. In this paper, we introduce the terminology of absolute and relative ordinal labels to make this distinction clear, and investigate both with a view to integrating them and exploiting their complementary nature. We propose a Markovian framework, referred to as the Dynamic Ordinal Markov Model (DOMM), that makes use of both absolute and relative ordinal information to improve speech-based ordinal emotion prediction. Finally, the proposed framework is validated on two speech corpora commonly used in affective computing, the RECOLA and IEMOCAP databases, across a range of system configurations. The results consistently indicate that integrating relative ordinal information improves absolute ordinal emotion prediction.
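One generic way to combine absolute and relative ordinal information in a Markov model is to let per-frame scores carry the absolute information and the transition matrix carry the relative (rise/fall/stay) information, then decode with Viterbi. This is a hedged sketch of that general idea, not the paper's DOMM; the function name and toy probabilities are illustrative assumptions.

```python
import numpy as np

def viterbi(emission_logp, trans_logp, init_logp):
    """Most likely ordinal state sequence under a simple Markov model.

    emission_logp: (T, K) absolute per-frame log-scores for K ordinal levels.
    trans_logp:    (K, K) log-probabilities that can encode relative
                   information, e.g. how likely the level rises or falls.
    init_logp:     (K,) log prior over the first frame's level.
    """
    T, K = emission_logp.shape
    dp = init_logp + emission_logp[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + trans_logp          # (K, K): prev -> next
        back[t] = np.argmax(cand, axis=0)        # best predecessor per state
        dp = np.max(cand, axis=0) + emission_logp[t]
    path = [int(np.argmax(dp))]                  # backtrack from the best end
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: uniform transitions, frames that all favor the middle level.
em = np.log(np.array([[0.1, 0.8, 0.1]] * 3))
tr = np.log(np.full((3, 3), 1 / 3))
init = np.log(np.full(3, 1 / 3))
print(viterbi(em, tr, init))
```

Biasing `trans_logp` toward neighboring levels is one way relative ("higher/lower than the previous frame") knowledge could smooth noisy absolute predictions.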
Is It Time to Redefine the Classification Task for Deep Neural Networks?
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples, which are generated by adding small adversarial perturbations to legitimate examples to cause DNNs to produce wrong outputs. Most existing work focuses on the robustness of the deep model, while little attention has been paid to the robustness of the learning task itself defined on DNNs. We therefore reframe this issue as the robustness of the deep neural learning system, which consists of the deep model and the learning task defined on it. The deep model is usually a deep neural network, involving the model architecture, data, training loss, and training algorithm. We conjecture that the vulnerability of the deep learning system is also rooted in the learning task itself. This paper defines an interval-label classification task for the deep classification system, whose labels are predefined non-overlapping intervals instead of a fixed value (hard label) or a probability vector (soft label). Experimental results demonstrate that the interval-label classification task is more robust than the traditional classification task while retaining accuracy.
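The interval-label idea can be sketched as assigning each class a disjoint target interval on the real line: a regression-style output anywhere inside a class's interval counts as that class, which gives the model slack that a single hard target lacks. The helper names, interval widths, and hinge penalty below are illustrative assumptions, not the paper's exact construction.

```python
def class_to_interval(c, width=2.0, gap=1.0):
    """Disjoint target interval for class c (assumed evenly spaced layout)."""
    lo = c * (width + gap)
    return lo, lo + width

def interval_hinge(output, c, width=2.0, gap=1.0):
    """Zero loss anywhere inside class c's interval; linear penalty outside."""
    lo, hi = class_to_interval(c, width, gap)
    return max(0.0, lo - output) + max(0.0, output - hi)

def predict(output, n_classes, width=2.0, gap=1.0):
    """Class whose interval contains the output, else nearest interval midpoint."""
    mids = []
    for c in range(n_classes):
        lo, hi = class_to_interval(c, width, gap)
        if lo <= output <= hi:
            return c
        mids.append((abs(output - (lo + hi) / 2), c))
    return min(mids)[1]

print(predict(4.0, 3))   # output falls inside class 1's interval [3, 5]
```

Because any output inside the interval is already loss-free, an adversarial perturbation must push the output across the gap between intervals to flip the prediction, which is the intuition behind the claimed robustness gain.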
Exact Passive-Aggressive Algorithms for Learning to Rank Using Interval Labels
Manwani, Naresh, Chandra, Mohit
In this paper, we propose exact passive-aggressive (PA) online algorithms for learning to rank. The proposed algorithms can be used even when we have interval labels instead of exact labels for examples. The proposed algorithms solve a convex optimization problem at every trial, and we find the exact solution to these optimization problems to determine the updated parameters. We propose a support class algorithm (SCA), which finds the active constraints using the KKT conditions of the optimization problems. These active constraints form a support set, which determines the set of thresholds that need to be updated. We derive update rules for PA, PA-I, and PA-II. We show that the proposed algorithms maintain the ordering of the thresholds after every trial, and we provide mistake bounds for the proposed algorithms in both ideal and general settings. We also show experimentally that the proposed algorithms successfully learn accurate classifiers using interval labels as well as exact labels, and compare favorably with other approaches.
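A perceptron-style simplification of threshold-based online ranking with interval labels can convey the mechanics: only thresholds that contradict the interval trigger an update, and the weight vector moves by the net direction, much like the classic PRank update. This is a hedged sketch, not the paper's exact closed-form PA/PA-I/PA-II solutions, and unlike those it does not guarantee threshold ordering in general.

```python
import numpy as np

def pa_interval_update(w, thetas, x, lo, hi, step=1.0):
    """One simplified online update for ordinal ranking with interval [lo, hi].

    Illustrative sketch only (fixed step size instead of the exact PA
    optimization).  Threshold j separates ranks j and j+1; a threshold is
    moved only if it contradicts the interval label, so an exact label is
    the special case lo == hi.
    """
    s = float(w @ x)
    direction = 0
    for j in range(len(thetas)):
        rank = j + 1
        if rank < lo and s <= thetas[j]:     # score should be above, is below
            thetas[j] -= step
            direction += 1
        elif rank >= hi and s >= thetas[j]:  # score should be below, is above
            thetas[j] += step
            direction -= 1
    w = w + step * direction * x             # net push on the weight vector
    return w, thetas

# Toy trial: score 0 contradicts exact label 2 (needs to exceed threshold 1).
w, thetas = pa_interval_update(np.zeros(2), [0.0, 1.0], np.array([1.0, 0.0]), 2, 2)
print(w, thetas)
```

With a true interval label, thresholds strictly inside the interval are left untouched, which is how such updates stay "passive" on information the label cannot resolve.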