When Less is More: On the Value of "Co-training" for Semi-Supervised Software Defect Predictors
Suvodeep Majumder, Joymallya Chakraborty, Tim Menzies
–arXiv.org Artificial Intelligence
Labeling a module as defective or non-defective is an expensive task. Hence, there are often limits on how much labeled data is available for training. Semi-supervised classifiers use far fewer labels for training models, but there are numerous semi-supervised methods, including self-labeling, co-training, maximal-margin, and graph-based methods, to name a few. Only a handful of these methods have been tested in SE for (e.g.) predicting defects, and even then, those tests were run on just a handful of projects. This paper applies a wide range of 55 semi-supervised learners to over 714 projects. We find that semi-supervised "co-training methods" work significantly better than other approaches. However, co-training must be used with caution, since the specific co-training method should be selected carefully to match a user's specific goals. Also, we warn that a commonly used co-training method ("multi-view", where different learners get different sets of columns) does not improve predictions while adding greatly to run time (11 hours vs. 1.8 hours). Those cautions stated, we find that using these "co-trainers" we can label just 2.5% of the data, then make predictions that are competitive with those made using 100% of the data. It is an open question, worthy of future work, whether these reductions can be seen in other areas of software analytics. All the code used and datasets analyzed during the current study are available at https://GitHub.com/Suvodeep90/Semi_Supervised_Methods.
Nov-10-2022
- Country:
  - North America > United States (0.28)
- Genre:
  - Research Report
    - Experimental Study (0.68)
    - New Finding (1.00)
  - Workflow (0.92)
- Industry:
  - Education (0.66)
  - Health & Medicine (0.46)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning
        - Evolutionary Systems (0.67)
        - Learning Graphical Models > Directed Networks > Bayesian Learning (0.45)
        - Neural Networks (0.68)
        - Performance Analysis > Accuracy (0.68)
        - Statistical Learning > Support Vector Machines (0.46)
      - Natural Language (1.00)
      - Representation & Reasoning (1.00)
    - Data Science > Data Mining (1.00)
    - Information Management (0.94)
    - Software (1.00)
      - Software Engineering (1.00)