Modeling Professionalism in Expert Questioning through Linguistic Differentiation
Giulia D'Agostino, Chung-Chi Chen
– arXiv.org Artificial Intelligence
Professionalism is a crucial yet underexplored dimension of expert communication, particularly in high-stakes domains like finance. This paper investigates how linguistic features can be leveraged to model and evaluate professionalism in expert questioning. We introduce a novel annotation framework to quantify structural and pragmatic elements in financial analyst questions, such as discourse regulators, prefaces, and request types. Using both human-authored and large language model (LLM)-generated questions, we construct two datasets: one annotated for perceived professionalism and one labeled by question origin. We show that the same linguistic features correlate strongly with both human judgments and authorship origin, suggesting a shared stylistic foundation. Furthermore, a classifier trained solely on these interpretable features outperforms gemini-2.0 and SVM baselines in distinguishing expert-authored questions. Our findings demonstrate that professionalism is a learnable, domain-general construct that can be captured through linguistically grounded modeling.
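The paper itself is not reproduced here, but the approach the abstract describes, training a classifier on a small set of interpretable linguistic features rather than on raw text, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the cue lexicons, the feature definitions, and the toy labels are invented stand-ins, not the authors' annotation framework or data.

```python
# Minimal sketch (not the authors' code): a classifier over hand-crafted,
# interpretable linguistic features, in the spirit of the approach the
# abstract describes. Lexicons and labels below are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical cue lexicons standing in for the paper's annotated categories
# (discourse regulators, prefaces, request types).
DISCOURSE_REGULATORS = {"okay", "so", "right", "well", "thanks"}
PREFACE_CUES = {"just to clarify", "following up on", "if i may"}
REQUEST_CUES = {"could you", "can you", "would you", "how should we"}

def features(question: str) -> list[float]:
    """Map a question to a vector of interpretable feature counts."""
    q = question.lower()
    tokens = q.split()
    return [
        sum(t.strip(",.?") in DISCOURSE_REGULATORS for t in tokens),  # regulators
        sum(cue in q for cue in PREFACE_CUES),                        # prefaces
        sum(cue in q for cue in REQUEST_CUES),                        # request cues
        float(len(tokens)),                                           # length
        float(q.count("?")),                                          # question marks
    ]

# Toy data: 1 = expert-authored, 0 = LLM-generated (labels invented here).
questions = [
    "Thanks. Just to clarify, could you walk us through the margin guidance?",
    "What is the revenue?",
    "Okay, following up on capex, how should we think about 2025 phasing?",
    "Please describe the company's strategy in detail.",
]
labels = [1, 0, 1, 0]

X = np.array([features(q) for q in questions])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(np.array([features("So, can you quantify the FX headwind?")])))
```

Because every feature is a named, human-readable count, the fitted coefficients can be inspected directly, which is the interpretability property the abstract contrasts with LLM and SVM baselines.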
Jul-29-2025
- Country:
  - Asia
    - Japan (0.04)
    - South Korea (0.04)
  - Europe
    - Switzerland (0.04)
    - United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Genre:
  - Research Report > New Finding (1.00)
- Technology: