Evaluating Out-of-Distribution Performance on Document Image Classifiers
Neural Information Processing Systems
The ability of a document classifier to handle inputs drawn from a distribution different from the training distribution is crucial for robust deployment and generalizability. The RVL-CDIP corpus [18] is the de facto standard benchmark for document classification, yet to our knowledge no studies that use this corpus include evaluation on out-of-distribution documents. In this paper, we curate and release a new benchmark for evaluating the out-of-distribution performance of document classifiers. Our new benchmark consists of two types of documents: those that do not belong to any of the 16 in-domain RVL-CDIP categories (RVL-CDIP-O), and those that belong to one of the 16 in-domain categories yet are drawn from a distribution different from that of the original RVL-CDIP dataset (RVL-CDIP-N). While prior work on document classification for in-domain RVL-CDIP documents reports high accuracy scores, we find that these models exhibit accuracy drops of roughly 15-30% on our new out-of-domain RVL-CDIP-N benchmark, and further struggle to distinguish between in-domain RVL-CDIP-N and out-of-domain RVL-CDIP-O inputs. Our new benchmark provides researchers with a valuable resource for analyzing the out-of-distribution performance of document classifiers.
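The task of distinguishing in-domain (RVL-CDIP-N) from out-of-domain (RVL-CDIP-O) inputs is commonly evaluated by scoring each input with the classifier's maximum softmax probability and measuring how well that score separates the two sets (e.g., via AUROC). The sketch below illustrates this standard baseline with NumPy only; the function names and the rank-sum AUROC formulation are illustrative, not the paper's own evaluation code.

```python
import numpy as np

def max_softmax_score(logits):
    """Maximum softmax probability (MSP), a standard baseline OOD score.

    Higher scores suggest the classifier treats the input as in-distribution.
    `logits` has shape (n_examples, n_classes), e.g. 16 classes for RVL-CDIP.
    """
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def auroc(in_scores, out_scores):
    """AUROC for separating in-distribution (positives) from OOD (negatives).

    Uses the rank-sum (Mann-Whitney U) formulation; exact ties are broken
    by sort order rather than midranked, which is fine for a sketch.
    """
    scores = np.concatenate([in_scores, out_scores])
    ranks = scores.argsort().argsort() + 1  # 1-based ranks
    n_in, n_out = len(in_scores), len(out_scores)
    u = ranks[:n_in].sum() - n_in * (n_in + 1) / 2
    return u / (n_in * n_out)
```

A classifier that is confident on in-domain documents but uncertain on OOD documents yields AUROC near 1.0; the paper's finding that models "struggle to distinguish" the two sets corresponds to AUROC values much closer to the chance level of 0.5.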