Error Correction Output Codes for Robust Neural Networks against Weight-errors: A Neural Tangent Kernel Point of View

Neural Information Processing Systems

Error correcting output code (ECOC) is a classic method that encodes binary classifiers to tackle multi-class classification in decision trees and neural networks. Among ECOCs, the one-hot code has become the default choice in modern deep neural networks (DNNs) due to its simplicity in decision making. However, it is significantly limited in its ability to achieve high robust accuracy, particularly in the presence of weight errors. While recent studies have experimentally demonstrated that non-one-hot ECOCs, with their multi-bit error-correction ability, could be a better solution, there is a notable absence of theoretical foundations that elucidate the relationship between codeword design, weight-error magnitude, and network characteristics, so as to provide robustness guarantees. This work bridges this gap through the lens of the neural tangent kernel (NTK). We have two important theoretical findings: 1) In clean models (without weight errors), switching from the one-hot code to a non-one-hot ECOC is akin to changing the decoding metric from the $l_2$ distance to the Mahalanobis distance.
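A minimal sketch of the two decoding metrics the abstract contrasts. The codewords, network output, and covariance matrix below are made-up illustrations, not values from the paper.

```python
import numpy as np

# Illustrative ECOC codewords (rows = classes) and one network output vector.
codewords = np.array([
    [ 1,  1,  1,  1],   # class 0
    [ 1, -1,  1, -1],   # class 1
    [ 1,  1, -1, -1],   # class 2
])
output = np.array([0.9, 0.8, -0.7, -0.6])  # example network output

# l2 decoding: pick the nearest codeword in Euclidean distance.
l2_pred = int(np.argmin(np.linalg.norm(codewords - output, axis=1)))

# Mahalanobis decoding: weight the distance by an (assumed) output covariance.
cov = np.diag([1.0, 2.0, 1.0, 2.0])        # hypothetical covariance, not from the paper
cov_inv = np.linalg.inv(cov)
diff = codewords - output
maha = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
maha_pred = int(np.argmin(maha))

print(l2_pred, maha_pred)  # both decode to class 2 for this example
```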



0cb929eae7a499e50248a3a78f7acfc7-AuthorFeedback.pdf

Neural Information Processing Systems

We appreciate the positive and constructive comments, and address the main concerns raised by the reviewers below. Table 1: Accuracies [%] of baseline and proposed models with different meta-class set configurations on CUB-200. We will present more detailed results if our paper is accepted. Others [All]: We will supplement the missing details and results in the final manuscript if our paper is accepted.


Improving Generalizability of Kolmogorov-Arnold Networks via Error-Correcting Output Codes

Lee, Youngjoon, Gong, Jinu, Kang, Joonhyuk

arXiv.org Artificial Intelligence

In this work, we integrate Error-Correcting Output Codes (ECOC) into the KAN framework to transform multi-class classification into multiple binary tasks, improving robustness via Hamming-distance decoding. Our proposed KAN-with-ECOC framework outperforms vanilla KAN on a challenging blood cell classification dataset, achieving higher accuracy across diverse hyperparameter settings. Ablation studies further confirm that ECOC consistently enhances performance across the FastKAN and FasterKAN variants. These results demonstrate that ECOC integration significantly boosts KAN generalizability in critical healthcare AI applications. To the best of our knowledge, this is the first work combining ECOC with KANs to enhance multi-class medical image classification performance.
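The Hamming-distance decoding mentioned above can be sketched as follows; the code matrix and the binary classifiers' outputs are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical ECOC code matrix: rows = classes, columns = binary tasks.
code_matrix = np.array([
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 0],
])

def ecoc_decode(binary_preds, code_matrix):
    """Return the class whose codeword has minimum Hamming distance."""
    dists = (code_matrix != binary_preds).sum(axis=1)
    return int(np.argmin(dists))

# Example: the five binary classifiers output these bits for one sample.
preds = np.array([0, 1, 0, 1, 0])
print(ecoc_decode(preds, code_matrix))  # class 1 (Hamming distance 1)
```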


Reviews: Combinatorial Inference against Label Noise

Neural Information Processing Systems

The combinatorial (meta-class or super-class) idea is interesting: it is reasonable, and one easily expects it to work well. In terms of related work, I suggest adding two related papers. One is ECOC (Solving Multiclass Learning Problems via Error-Correcting Output Codes, JAIR 1995), which is a classic combinatorial method for classification. The other is PENCIL (Probabilistic End-to-end Noise Correction for Learning with Noisy Labels, CVPR 2019), which is a novel noise-handling method. With regard to the method, the proposed probabilistic way to decipher a class from its meta-classes is simple.


Multiclass Learning Approaches: A Theoretical Comparison with Implications

Neural Information Processing Systems

We theoretically analyze and compare the following five popular multiclass classification methods: One vs. All, All Pairs, tree-based classifiers, Error-Correcting Output Codes (ECOC) with randomly generated code matrices, and Multiclass SVM. In the first four methods, classification is based on a reduction to binary classification.
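The first reduction the abstract lists, One vs. All, trains one binary scorer per class and predicts the argmax score. The linear weights below are toy, hand-set values purely to show the decision rule, not a trained model.

```python
import numpy as np

# Toy One-vs-All setup: one binary linear scorer per class.
W = np.array([
    [ 1.0,  0.0],   # scorer for class 0
    [ 0.0,  1.0],   # scorer for class 1
    [-1.0, -1.0],   # scorer for class 2
])

def ova_predict(x):
    scores = W @ x          # one binary decision value per class
    return int(np.argmax(scores))

print(ova_predict(np.array([2.0, 0.5])))  # class 0 scores highest
```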


Class Binarization to NeuroEvolution for Multiclass Classification

Lan, Gongjin, Gao, Zhenyu, Tong, Lingyao, Liu, Ting

arXiv.org Artificial Intelligence

Multiclass classification is a fundamental and challenging task in machine learning. Existing techniques for multiclass classification can be categorized as (i) decomposition into binary classification, (ii) extension from binary classification, and (iii) hierarchical classification. Decomposing a multiclass problem into a set of binary classifications that can be efficiently solved by binary classifiers, known as class binarization, is a popular technique for multiclass classification. Neuroevolution, a general and powerful technique for evolving the structure and weights of neural networks, has been successfully applied to binary classification. In this paper, we apply class binarization techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting Topologies (NEAT), to generate neural networks for multiclass classification. We propose a new method that applies Error-Correcting Output Codes (ECOC) to design the class binarization strategies for neuroevolution on multiclass classification. The ECOC strategies are compared with the One-vs-One and One-vs-All class binarization strategies on three well-known datasets: Digit, Satellite, and Ecoli. We analyse their performance from four aspects: multiclass classification degradation, accuracy, evolutionary efficiency, and robustness. The results show that NEAT with ECOC achieves high accuracy with low variance. Specifically, it shows significant benefits with a flexible number of binary classifiers and strong robustness.
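The One-vs-One strategy compared above can be sketched as pairwise voting; the pairwise winners below are hard-coded examples standing in for the outputs of trained pairwise classifiers.

```python
import numpy as np
from itertools import combinations

classes = [0, 1, 2]
# Hypothetical outcome of each pairwise binary classifier for one sample.
pairwise_winner = {(0, 1): 1, (0, 2): 0, (1, 2): 1}

# One-vs-One decoding: each pairwise winner gets a vote; argmax wins.
votes = np.zeros(len(classes), dtype=int)
for pair in combinations(classes, 2):
    votes[pairwise_winner[pair]] += 1

print(int(np.argmax(votes)))  # class 1 wins two of the three duels
```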