Investigating Cross-Domain Behaviors of BERT in Review Understanding
Review score prediction requires review text understanding, a critical real-world application of natural language processing. Because product reviews span dissimilar text domains, a common practice is to fine-tune BERT models on reviews from different domains. However, there has not yet been an empirical study of the cross-domain behavior of BERT models across the various tasks of product review understanding. In this project, we investigate BERT text classification models fine-tuned on single-domain and multi-domain Amazon review data. We find that although single-domain models achieve marginally better performance on their corresponding domain than multi-domain models do, multi-domain models outperform single-domain models when evaluated on multi-domain data, on single-domain data that the single-domain model was not fine-tuned on, and on average across all tests. Thus, while slight accuracy gains can be obtained by fine-tuning single-domain models, computational resources and costs can be reduced by using multi-domain models that perform well across domains.
arXiv.org Artificial Intelligence
Jun-27-2023
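As a rough illustration of the fine-tuning setup the abstract describes, the sketch below fine-tunes a BERT text classifier on star-rating labels with the Hugging Face Transformers library. It is not the authors' code: the file names, column names, and hyperparameters are illustrative assumptions, and a single-domain versus multi-domain comparison would simply swap in the corresponding training CSVs.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of fine-tuning
# BERT for review score prediction as 5-way text classification.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"

# Hypothetical data files: CSVs of Amazon reviews with a "text" column and a
# "label" column holding the star rating mapped to 0-4.
dataset = load_dataset("csv", data_files={"train": "reviews_train.csv",
                                          "test": "reviews_test.csv"})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate long reviews; padding is handled dynamically at batch time.
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

# Five output classes, one per star rating.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=5)

args = TrainingArguments(
    output_dir="bert-review-scores",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)

trainer.train()
print(trainer.evaluate())
```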