Traditional Methods Outperform Generative LLMs at Forecasting Credit Ratings
Drinkall, Felix; Pierrehumbert, Janet B.; Zohren, Stefan
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have been shown to perform well on many downstream tasks. Transfer learning can enable LLMs to acquire skills that were not targeted during pre-training, and in financial contexts LLMs can sometimes beat well-established benchmarks. This paper investigates how well LLMs perform at forecasting corporate credit ratings. We show that while LLMs are very good at encoding textual information, traditional methods remain highly competitive at encoding numeric and multimodal data. On our task, current LLMs perform worse than a more traditional XGBoost architecture that combines fundamental and macroeconomic data with high-density text-based embedding features.
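The multimodal baseline described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' exact pipeline: firm fundamentals and macroeconomic indicators are concatenated with dense text embeddings and fed to an XGBoost classifier over ordinal rating classes. All feature names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of a multimodal XGBoost credit-rating baseline.
# Feature names, shapes, and hyperparameters are illustrative assumptions.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_firms = 1000

# Hypothetical inputs standing in for real data sources.
fundamentals = rng.random((n_firms, 12))   # e.g. leverage, coverage ratios
macro = rng.random((n_firms, 5))           # e.g. GDP growth, interest rates
text_emb = rng.random((n_firms, 384))      # dense embeddings of filings/news
ratings = rng.integers(0, 8, n_firms)      # ordinal rating classes (AAA..C)

# Concatenate modalities into one tabular feature matrix.
X = np.hstack([fundamentals, macro, text_emb])
X_train, X_test, y_train, y_test = train_test_split(
    X, ratings, test_size=0.2, random_state=0
)

model = xgb.XGBClassifier(
    n_estimators=300,
    max_depth=6,
    learning_rate=0.05,
    objective="multi:softprob",
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Gradient-boosted trees handle heterogeneous numeric features well, which is one plausible reason this kind of architecture stays competitive with generative LLMs on tabular-plus-text forecasting tasks.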
Jul-24-2024
- Country:
- Asia
- China (0.04)
- Middle East > Jordan (0.04)
- Europe > United Kingdom
- England > Oxfordshire > Oxford (0.14)
- North America > United States
- Minnesota > Hennepin County
- Minneapolis (0.14)
- New York > New York County
- New York City (0.04)
- Genre:
- Research Report > New Finding (0.46)
- Industry:
- Banking & Finance > Credit (1.00)
- Government > Regional Government
- Law (1.00)