Financial News Analytics Using Fine-Tuned Llama 2 GPT Model

Pavlyshenko, Bohdan M.

arXiv.org Artificial Intelligence 

Large language models (LLMs) based on generative pre-trained transformers (GPT), such as ChatGPT, show high efficiency in the analysis of complex texts. Recently, many new smaller open-source LLMs have emerged, e.g. Llama, Falcon, GPT4All, and GPT-J. Open-source LLMs can be fine-tuned for specific custom problems and deployed on custom servers, e.g. in cloud computing services such as AWS or GCP. Compared to conventional transformer-based language models, LLMs offer some new capabilities. One of them is zero-shot and few-shot learning: the model performs well when shown only a few training examples, or even no examples at all, given only instructions describing what should be done. Another important capability is reasoning, whereby a model can generate new patterns and conclusions that are based on an input prompt and on facts known to the model, but that were not directly included during the training process. As a result, the model can generate analytical texts with unexpected but useful chains of thought. One of the approaches to using LLMs is based on retrieval augmented generation (RAG), which uses the results from other services, e.g. …
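
As an illustration of the few-shot learning described above, the following sketch prompts an open-source Llama 2 model with two labeled financial headlines and asks it to classify a third, with no gradient updates to the model. This is a minimal sketch, assuming the Hugging Face transformers library and access to the gated meta-llama/Llama-2-7b-hf checkpoint; the model ID and the example headlines are illustrative assumptions, not taken from the paper.

    # Few-shot sentiment classification of financial headlines with an
    # open-source LLM (illustrative sketch; not the paper's exact setup).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires gated access
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Few-shot prompt: an instruction plus two labeled examples; the model
    # is expected to continue the pattern for the unlabeled headline.
    prompt = (
        "Classify the sentiment of each financial headline as positive, "
        "negative, or neutral.\n"
        "Headline: Company X beats quarterly earnings expectations.\n"
        "Sentiment: positive\n"
        "Headline: Regulator fines Bank Y over compliance failures.\n"
        "Sentiment: negative\n"
        "Headline: Shares of Firm Z close flat ahead of the Fed meeting.\n"
        "Sentiment:"
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens (the predicted label).
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))

For task-specific behavior beyond what prompting alone provides, the same checkpoint can be fine-tuned, e.g. with parameter-efficient methods such as LoRA adapters, and then deployed on a custom cloud instance as the abstract describes.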