On Affine Homotopy between Language Encoders
Robin S. M. Chan
Neural Information Processing Systems
Pre-trained language encoders (functions that represent text as vectors) are an integral component of many NLP tasks. We tackle a natural question in language encoder analysis: what does it mean for two encoders to be similar? We contend that a faithful measure of similarity needs to be intrinsic, that is, task-independent, yet still informative of extrinsic similarity, i.e., performance on downstream tasks. It is common to consider two encoders similar if they are homotopic, i.e., if they can be aligned through some transformation.
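The notion of alignment through a transformation can be made concrete with a small sketch. Assuming the transformation class is affine maps (as the title suggests), one can fit an affine map from one encoder's embeddings to the other's by least squares and use the residual as a rough alignment score; the function name and scoring convention here are illustrative, not taken from the paper.

```python
import numpy as np

def affine_alignment_error(X, Y):
    """Fit an affine map Y ~ X @ A + b by least squares and return the
    relative residual (0 means Y is an exact affine image of X).
    Illustrative sketch; not the paper's actual similarity measure."""
    n = X.shape[0]
    # Augment X with a bias column so one solve recovers both A and b.
    X_aug = np.hstack([X, np.ones((n, 1))])
    W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
    residual = np.linalg.norm(X_aug @ W - Y)
    return residual / np.linalg.norm(Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # embeddings from encoder 1
A = rng.normal(size=(8, 8))
b = rng.normal(size=8)
Y_aligned = X @ A + b                  # encoder 2 is an affine image of encoder 1
Y_random = rng.normal(size=(100, 8))   # an unrelated encoder

print(affine_alignment_error(X, Y_aligned))  # near zero: affinely alignable
print(affine_alignment_error(X, Y_random))   # large: not alignable
```

An intrinsic similarity measure of this kind depends only on the two embedding sets, not on any downstream task, which is the distinction the abstract draws.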