Financial News Analytics Using Fine-Tuned Llama 2 GPT Model
arXiv.org Artificial Intelligence
Large language models (LLMs) based on generative pre-trained transformers (GPT), such as ChatGPT, show high efficiency in the analysis of complex texts. Recently, many new, smaller open-source LLMs have emerged, e.g. Llama, Falcon, GPT4All, and GPT-J. Open-source LLMs can be fine-tuned for specific custom problems and deployed on custom servers, e.g. in cloud computing services such as AWS or GCP. LLMs have several new features compared to conventional transformer-based language models. One is zero-shot and few-shot learning: the model performs well when shown only a few training examples, or even none at all, given only instructions describing what should be done. Another important feature is reasoning: a model can generate new patterns and conclusions based on an input prompt and facts it learned during training, even when these were not included in the prompt directly. As a result, the model can generate analytical texts with unexpected but useful chains of thought. One approach to using LLMs is based on retrieval augmented generation (RAG), which uses the results from other services, e.g.
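The few-shot prompting and RAG ideas mentioned in the abstract can be sketched in plain Python. This is a hypothetical illustration, not the paper's code: the toy corpus, the keyword-overlap retriever, and the prompt template are all assumptions standing in for a real retrieval service and an actual Llama 2 call.

```python
# Hypothetical sketch: few-shot sentiment prompting with a naive
# retrieval step prepended (a toy stand-in for RAG). The corpus,
# examples, and template are illustrative assumptions.

# A tiny "document store" standing in for an external retrieval service.
CORPUS = [
    "Company A reported record quarterly revenue, beating estimates.",
    "Regulators opened an investigation into Company B's accounting.",
    "Company C announced a stock buyback program.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a few-shot sentiment prompt augmented with retrieved context."""
    examples = [
        ("Shares surged after strong earnings.", "positive"),
        ("The firm warned of falling margins.", "negative"),
    ]
    context = "\n".join(retrieve(query, CORPUS))
    shots = "\n".join(f"News: {t}\nSentiment: {s}" for t, s in examples)
    return (
        "Classify the sentiment of financial news as positive or negative.\n"
        f"Context:\n{context}\n\n{shots}\n"
        f"News: {query}\nSentiment:"
    )

prompt = build_prompt("Company A revenue beat estimates this quarter.")
print(prompt)
```

In a real pipeline, the assembled prompt would be sent to the fine-tuned model, and the retriever would be a proper search or embedding service rather than keyword overlap.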
Sep-11-2023