RecGPT: Generative Pre-training for Text-based Recommendation
We present the first domain-adapted and fully-trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public Hugging Face links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT
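Since the abstract states the models are published via Hugging Face, a minimal sketch of loading the instruction-tuned checkpoint with the `transformers` library is shown below. The repository id `vinai/RecGPT-7B-Instruct` and the prompt format are assumptions for illustration; the exact identifiers and instruction templates are documented in the linked GitHub repository.

```python
# Minimal sketch: loading a released RecGPT checkpoint with Hugging Face transformers.
# NOTE: "vinai/RecGPT-7B-Instruct" is an assumed repository id; consult the
# VinAIResearch/RecGPT README for the official model links.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/RecGPT-7B-Instruct"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Hypothetical rating-prediction style prompt; the real instruction format
# follows the fine-tuning data released by the authors.
prompt = "Predict the rating (1-5) this user would give to the item described below: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```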
arXiv.org Artificial Intelligence
May-21-2024