Pre-trained language models as knowledge bases for Automotive Complaint Analysis

Viellieber, V. D., Aßenmacher, M.

arXiv.org Machine Learning 

Recently, researchers have taken an interest in the knowledge stored in large pre-trained language models. Petroni et al. (2019) investigated BERT (Devlin et al., 2018) and other architectures with respect to their ability to store commonsense factual knowledge. Since the stored knowledge depends heavily on the pre-training corpus, we ask whether one can "teach" these kinds of models further knowledge by exposing them to texts from specific domains, such as customer complaints in the automotive industry. Especially for product-driven organizations such as car manufacturers, customer feedback provides a precious source of information for product improvements, e.g. in terms of potential security risks identified and mentioned by customers. However, the structured use of this data remains an open problem in industry, despite numerous investigations with advanced NLP methods (Choe et al., 2013; Lee et al., 2015; Akella et al., 2017; Liang et al., 2017; Joung et al., 2019). Handling this fuzzy data and satisfying the demand for detailed information extraction in an intelligent manner remains challenging. The recent developments in NLP lead us to the idea of evaluating the ability of pre-trained language models to act as a domain-specific knowledge base. We investigate whether a language model, further pre-trained on customer feedback, is able to store customer opinions about products, features, and services as knowledge in its model parameters.
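The probing setup referenced here (Petroni et al., 2019) queries a masked language model with cloze-style templates and checks whether the gold token appears among the top-k predictions. The following is a minimal sketch of that evaluation logic; the ranked token lists, queries, and the complaint-domain example are hypothetical placeholders standing in for actual masked-LM output, not data from the paper.

```python
# Toy sketch of cloze-style knowledge probing (in the spirit of Petroni et al., 2019).
# A real setup would obtain `ranked` from a masked LM's probability distribution
# for a query such as "The customer complains about the [MASK]."; here the
# rankings are hard-coded placeholders to illustrate the metric only.

def precision_at_k(ranked_predictions, gold, k=1):
    """Return 1.0 if the gold token is among the top-k predictions, else 0.0."""
    return 1.0 if gold in ranked_predictions[:k] else 0.0

# Hypothetical ranked model predictions for two cloze queries.
queries = [
    (["brakes", "engine", "seat", "radio"], "brakes"),  # gold fact at rank 1
    (["brakes", "engine", "seat", "radio"], "engine"),  # gold fact at rank 2
]

# Mean precision@k over all queries, the standard LAMA-style summary metric.
p1 = sum(precision_at_k(r, g, k=1) for r, g in queries) / len(queries)
p2 = sum(precision_at_k(r, g, k=2) for r, g in queries) / len(queries)
print(p1, p2)  # 0.5 1.0
```

In a full experiment, the same metric would be computed before and after further pre-training on domain text, so that any gain in precision@k can be attributed to the newly stored domain knowledge.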
