ADASYN


Finding the Sweet Spot: Optimal Data Augmentation Ratio for Imbalanced Credit Scoring Using ADASYN

Chia, Luis H.

arXiv.org Artificial Intelligence

Credit scoring models face a critical challenge: severe class imbalance, with default rates typically below 10%, which hampers model learning and predictive performance. While synthetic data augmentation techniques such as SMOTE and ADASYN have been proposed to address this issue, the optimal augmentation ratio remains unclear, with practitioners often defaulting to full balancing (1:1 ratio) without empirical justification. This study systematically evaluates 10 data augmentation scenarios using the Give Me Some Credit dataset (97,243 observations, 7% default rate), comparing SMOTE, BorderlineSMOTE, and ADASYN at different multiplication factors (1x, 2x, 3x). All models were trained using XGBoost and evaluated on a held-out test set of 29,173 real observations. Statistical significance was assessed using bootstrap testing with 1,000 iterations. Key findings reveal that ADASYN with 1x multiplication (doubling the minority class) achieved optimal performance with AUC of 0.6778 and Gini coefficient of 0.3557, representing statistically significant improvements of +0.77% and +3.00% respectively (p = 0.017, bootstrap test). Higher multiplication factors (2x and 3x) resulted in performance degradation, with 3x showing a -0.48% decrease in AUC, suggesting a "law of diminishing returns" for synthetic oversampling. The optimal class imbalance ratio was found to be 6.6:1 (majority:minority), contradicting the common practice of balancing to 1:1. This work provides the first empirical evidence of an optimal "sweet spot" for data augmentation in credit scoring, with practical guidelines for industry practitioners and researchers working with imbalanced datasets. While demonstrated on a single representative dataset, the methodology provides a reproducible framework for determining optimal augmentation ratios in other imbalanced domains.


Enhancing Metabolic Syndrome Prediction with Hybrid Data Balancing and Counterfactuals

Shah, Sanyam Paresh, Mamun, Abdullah, Soumma, Shovito Barua, Ghasemzadeh, Hassan

arXiv.org Artificial Intelligence

Metabolic Syndrome (MetS) is a cluster of interrelated risk factors that significantly increases the risk of cardiovascular diseases and type 2 diabetes. Despite its global prevalence, accurate prediction of MetS remains challenging due to issues such as class imbalance, data scarcity, and methodological inconsistencies in existing studies. In this paper, we address these challenges by systematically evaluating and optimizing machine learning (ML) models for MetS prediction, leveraging advanced data balancing techniques and counterfactual analysis. Multiple ML models, including XGBoost, Random Forest, and TabNet, among others, were trained and compared under various data balancing techniques such as random oversampling (ROS), SMOTE, ADASYN, and CTGAN. Additionally, we introduce MetaBoost, a novel hybrid framework that integrates SMOTE, ADASYN, and CTGAN, optimizing synthetic data generation through weighted averaging and iterative weight tuning to enhance the model's performance (achieving up to a 1.87% accuracy improvement over individual balancing techniques). A comprehensive counterfactual analysis is conducted to quantify the feature-level changes required to shift individuals from high-risk to low-risk categories. The results indicate that blood glucose (50.3%) and triglycerides (46.7%) were the most frequently modified features, highlighting their clinical significance in MetS risk reduction. Additionally, probabilistic analysis shows elevated blood glucose (85.5% likelihood) and triglycerides (74.9% posterior probability) as the strongest predictors. This study not only advances the methodological rigor of MetS prediction but also provides actionable insights for clinicians and researchers, highlighting the potential of ML in mitigating the public health burden of metabolic syndrome.


Detecting LGBTQ+ Instances of Cyberbullying

Arslan, Muhammad, Madrigal, Manuel Sandoval, Abuhamad, Mohammed, Hall, Deborah L., Silva, Yasin N.

arXiv.org Artificial Intelligence

Social media continues to have an impact on the trajectory of humanity. However, its introduction has also weaponized keyboards, allowing the abusive language normally reserved for in-person bullying to jump onto the screen, i.e., cyberbullying. Cyberbullying poses a significant threat to adolescents globally, affecting the mental health and well-being of many. A group that is particularly at risk is the LGBTQ+ community, as researchers have uncovered a strong correlation between identifying as LGBTQ+ and suffering from greater online harassment. Therefore, it is critical to develop machine learning models that can accurately discern cyberbullying incidents as they happen to LGBTQ+ members. The aim of this study is to compare the efficacy of several transformer models in identifying cyberbullying targeting LGBTQ+ individuals. We seek to determine the relative merits and demerits of these existing methods in addressing complex and subtle kinds of cyberbullying by assessing their effectiveness with real social media data.


Time Series Data Augmentation as an Imbalanced Learning Problem

Cerqueira, Vitor, Moniz, Nuno, Inácio, Ricardo, Soares, Carlos

arXiv.org Machine Learning

Recent state-of-the-art forecasting methods are trained on collections of time series. These methods, often referred to as global models, can capture common patterns in different time series to improve their generalization performance. However, they require large amounts of data that might not be readily available. Besides this, global models sometimes fail to capture relevant patterns unique to a particular time series. In these cases, data augmentation can be useful to increase the sample size of time series datasets. The main contribution of this work is a novel method for generating univariate time series synthetic samples. Our approach stems from the insight that the observations concerning a particular time series of interest represent only a small fraction of all observations. In this context, we frame the problem of training a forecasting model as an imbalanced learning task. Oversampling strategies are popular approaches used to deal with the imbalance problem in machine learning. We use these techniques to create synthetic time series observations and improve the accuracy of forecasting models. We carried out experiments using 7 different databases that contain a total of 5502 univariate time series. We found that the proposed solution outperforms both a global and a local model, thus providing a better trade-off between these two approaches.


Evaluating the Utility of GAN Generated Synthetic Tabular Data for Class Balancing and Low Resource Settings

Chereddy, Nagarjuna, Bolla, Bharath Kumar

arXiv.org Artificial Intelligence

The present study aimed to address the issue of imbalanced data in classification tasks and evaluated the suitability of SMOTE, ADASYN, and GAN techniques for generating synthetic data to address the class imbalance and improve the performance of classification models in low-resource settings. The study employed the Generalised Linear Model (GLM) algorithm for the class-balancing experiments and the Random Forest (RF) algorithm for the low-resource-setting experiments to assess model performance under varying training data. Recall was the primary evaluation metric for all classification models. The results of the class-balancing experiments showed that the GLM model trained on GAN-balanced data achieved the highest recall value. Similarly, in the low-resource experiments, models trained on data enhanced with GAN-synthesized samples exhibited better recall values than models trained on the original data. These findings demonstrate the potential of GAN-generated synthetic data for addressing the challenge of imbalanced data in classification tasks and improving model performance in low-resource settings.


The Mystery of ADASYN is Revealed

#artificialintelligence

This research assumes that you are familiar with class imbalance and the ADASYN algorithm. We strongly encourage our readers to review the conference article that launched ADASYN (just type that into Google Scholar or see the References section of this document), and then read any number of articles in Towards Data Science that discuss class imbalance and ADASYN. This is neither a guide nor an overview; it is a voyage into uncharted waters with startling discoveries. The answers are 1) surprising, 2) fascinating, and 3) extraordinary, in that order. All models in this research were built using the RandomForest and LogisticRegression algorithms in the scikit-learn library to gain information about both tree and linear structures, respectively. All predictive models were 10-fold cross-validated with stratified sampling, using "stratify=y" in train_test_split and "cv=10" in GridSearchCV.
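The cross-validation setup the post names ("stratify=y" in train_test_split, "cv=10" in GridSearchCV) looks roughly like this sketch; the data and parameter grid are invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)

# stratify=y keeps the class ratio identical on both sides of the split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# cv=10 performs 10-fold cross-validation (stratified automatically,
# since the target is a class label)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"max_depth": [3, None]},
                    cv=10, scoring="roc_auc")
grid.fit(X_tr, y_tr)
```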


Imbalanced-learn: Handling imbalanced class problem

#artificialintelligence

In the previous article here, we went through the different methods for dealing with imbalanced data. In this article, let us try to understand how to use the imbalanced-learn library to deal with imbalanced class problems. We will make use of the PyCaret library and UCI's "default of credit card clients" dataset, which is also built into PyCaret. Imbalanced-learn is a Python package that provides a number of re-sampling techniques to deal with class imbalance problems commonly encountered in classification tasks. Note that imbalanced-learn is compatible with scikit-learn and is also part of the scikit-learn-contrib projects.


A Method for Handling Multi-class Imbalanced Data by Geometry based Information Sampling and Class Prioritized Synthetic Data Generation (GICaPS)

Majumder, Anima, Dutta, Samrat, Kumar, Swagat, Behera, Laxmidhar

arXiv.org Machine Learning

This paper looks into the problem of handling imbalanced data in a multi-label classification problem. The problem is solved by proposing two novel methods that primarily exploit the geometric relationship between the feature vectors. The first is an undersampling algorithm that uses the angle between feature vectors to select more informative samples while rejecting the less informative ones. A suitable criterion is proposed to define the informativeness of a given sample. The second is an oversampling algorithm that uses a generative algorithm to create new synthetic data that respect all class boundaries. This is achieved by finding a "no man's land" based on the Euclidean distance between the feature vectors. The efficacy of the proposed methods is analyzed by solving a generic multi-class recognition problem based on a mixture of Gaussians. The superiority of the proposed algorithms is established through comparison with other state-of-the-art methods, including SMOTE and ADASYN, over ten different publicly available datasets exhibiting high-to-extreme data imbalance. The two methods are combined into a single data processing framework labeled "GICaPS" to highlight the role of geometry-based information (GI) sampling and Class-Prioritized Synthesis (CaPS) in dealing with the multi-class data imbalance problem, thereby making a novel contribution to this field.
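The abstract does not spell out the angle criterion, but the geometric idea can be caricatured as greedy filtering of near-parallel feature vectors. The following is a hypothetical sketch of such a filter, not the paper's actual algorithm.

```python
import numpy as np

def angle_undersample(X, min_angle_deg=45.0):
    # Greedily keep a sample only if its angle to every already-kept
    # sample exceeds the threshold; near-parallel vectors are treated
    # as redundant (less informative). Purely illustrative.
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos_thr = np.cos(np.deg2rad(min_angle_deg))
    kept = []
    for i, u in enumerate(unit):
        if all(abs(u @ unit[j]) < cos_thr for j in kept):
            kept.append(i)
    return X[kept]

rng = np.random.default_rng(0)
X_major = rng.normal(size=(300, 5))
X_kept = angle_undersample(X_major)  # a reduced, more diverse subset
```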


Handling Imbalanced Data: A Case Study for Binary Class Problems

Danquah, Richmond Addo

arXiv.org Artificial Intelligence

For several years now, one of the major issues in solving classification problems has been imbalanced data. Because most machine learning algorithms assume by default that the data are balanced, they do not take the class distribution of the sample into consideration. The results tend to be unsatisfactory and skewed towards the majority class. This implies that conclusions drawn from a model built on imbalanced data, without handling the imbalance, could be misleading in both practice and theory. Most researchers have focused on applying the Synthetic Minority Oversampling Technique (SMOTE) and the Adaptive Synthetic (ADASYN) sampling approach to handle data imbalance independently in their work, but have failed to explain the algorithms behind these techniques with computed examples. This paper focuses on both synthetic oversampling techniques and manually computes synthetic data points to make the algorithms easier to comprehend. We analyze the application of these synthetic oversampling techniques to binary classification problems with different imbalance ratios and sample sizes.
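The manual computation the paper advocates reduces, for SMOTE, to a single interpolation step between a minority sample and one of its minority-class nearest neighbours; the values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hand-computable SMOTE step: pick a minority sample x_i and one of
# its k nearest minority-class neighbours x_nn, then interpolate
x_i = np.array([2.0, 3.0])
x_nn = np.array([4.0, 5.0])
lam = rng.uniform()               # random interpolation factor in [0, 1)
x_new = x_i + lam * (x_nn - x_i)  # synthetic point on the segment x_i..x_nn
```

ADASYN uses the same interpolation but allocates more synthetic points to minority samples that have many majority-class neighbours, i.e., the harder-to-learn ones.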


Improved Preterm Prediction Based on Optimized Synthetic Sampling of EHG Signal

Xu, Jinshan, Chen, Zhenqin, Lu, Yanpei, Yang, Xi, Pumir, Alain

arXiv.org Machine Learning

Preterm labor is the leading cause of neonatal morbidity and mortality and has attracted research efforts from many scientific areas. The inter-relationship between uterine contraction and the underlying electrical activities makes the uterine electrohysterogram (EHG) a promising direction for preterm detection and prediction. Due to the scarcity of EHG signals, especially those of preterm patients, synthetic algorithms are applied to create artificial samples of the preterm type in order to remove the prediction bias towards term, at the expense of reducing feature effectiveness in machine-learning-based automatic preterm detection. To address this problem, we quantify the effect of synthetic samples (the balance coefficient) on feature effectiveness, and form a general performance metric by combining multiple feature scores with weights that describe their contributions to class separation. Combined with the activation/inactivation functions that characterize the effect of the abundance of training samples on term and preterm prediction precision, we obtain an optimal sample balance coefficient that trades off the benefit of synthetic samples in removing bias towards the majority class against the side effect of reduced feature importance. Substantial improvement in prediction precision has been achieved in a set of numerical tests on the publicly available TPEHG database, verifying the effectiveness of the proposed method.
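The weighted performance metric described above can be written schematically as follows; the symbols are our own shorthand, not the paper's notation.

```latex
% Feature scores s_k weighted by their contribution w_k to class
% separation, as a function of the balance coefficient \alpha
% (the ratio of synthetic to real preterm samples):
J(\alpha) = \sum_{k} w_k \, s_k(\alpha),
\qquad
\alpha^{*} = \arg\max_{\alpha} J(\alpha)
```

The optimum \alpha^{*} then plays the role of the "optimal sample balance coefficient" referred to in the abstract.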