Relational Schemata in BERT Are Inducible, Not Emergent: A Study of Performance vs. Competence in Language Models
arXiv.org Artificial Intelligence
While large language models like BERT demonstrate strong empirical performance on semantic tasks, it remains unclear whether this reflects genuine conceptual competence or surface-level statistical association. I investigate whether BERT encodes abstract relational schemata by examining its internal representations of concept pairs across taxonomic, mereological, and functional relations, comparing relational classification performance with the representational structure of [CLS] token embeddings. Results reveal that pretrained BERT supports high classification accuracy, indicating latent relational signals. However, concept pairs organize by relation type in high-dimensional embedding space only after fine-tuning on supervised relation classification tasks. This suggests that relational schemata are not emergent from pretraining alone but can be induced via task scaffolding. These findings demonstrate that behavioral performance does not necessarily imply structured conceptual understanding, though models can acquire inductive biases for grounded relational abstraction through appropriate training.
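The abstract's central dissociation, that relation type can be decoded from pretrained embeddings with high accuracy even though the embeddings do not visibly cluster by relation, can be illustrated on synthetic data. The sketch below is my own illustration, not the paper's code: the vectors, dimensions, and the scale of the planted signal are invented stand-ins for [CLS] embeddings of concept pairs. A linear probe recovers the relation label well above chance, while a geometric clustering measure (silhouette score) stays near zero because the signal lives in one low-variance direction buried under high-variance noise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import silhouette_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, dim = 150, 64
# Three hypothetical relation types: taxonomic / mereological / functional
labels = np.repeat([0, 1, 2], n_per_class)

# Synthetic stand-ins for [CLS] embeddings: isotropic high-variance noise
# in all dimensions, plus a weak relation-dependent shift along one axis.
X = rng.normal(scale=2.0, size=(3 * n_per_class, dim))
X[:, 0] += labels * 4.0  # linearly decodable, but small relative to total variance

# A linear probe reads out the relation type well above 1/3 chance...
probe_acc = cross_val_score(
    LogisticRegression(max_iter=1000), X, labels, cv=5
).mean()

# ...yet the raw geometry shows almost no clustering by relation.
sil = silhouette_score(X, labels)

print(f"linear probe accuracy: {probe_acc:.2f}")  # well above chance
print(f"silhouette score:      {sil:.2f}")        # near zero
```

Fine-tuning on relation classification, in the paper's account, would amount to amplifying such a direction until relation types separate geometrically, which is what the clustering-only-after-fine-tuning result describes.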
Jun-16-2025
- Genre:
- Research Report > New Finding (0.88)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science > Problem Solving (0.88)
- Machine Learning
- Neural Networks (0.93)
- Performance Analysis > Accuracy (0.35)
- Statistical Learning (0.94)
- Natural Language (1.00)
- Representation & Reasoning (1.00)