trading system
Integration of LSTM Networks in Random Forest Algorithms for Stock Market Trading Predictions
This paper analyzes and selects stock trading systems that combine different models with data of different natures, such as financial and microeconomic information. Specifically, building on previous work by the authors and applying advanced Machine Learning and Deep Learning techniques, our objective is to formulate stock-market trading algorithms with empirically tested statistical advantages, thus improving on results published in the literature. Our approach integrates Long Short-Term Memory (LSTM) networks with decision-tree-based algorithms such as Random Forest and Gradient Boosting: the former analyze price patterns of financial assets, while the latter are fed with companies' economic data. Numerical simulations of algorithmic trading with data from international companies and 10-weekday predictions confirm that an approach based on both fundamental and technical variables can outperform the usual approaches, which do not combine the two types of variables. Random Forest turned out to be the best performer among the decision-tree methods. We also discuss how the prediction performance of such a hybrid approach can be further improved by careful selection of the technical variables.
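The abstract describes the hybrid only at a high level; the following is a minimal sketch of that idea, assuming scikit-learn and TensorFlow/Keras, with synthetic data and an illustrative way of feeding the LSTM output to the Random Forest (the authors' exact pipeline, features, and hyperparameters are not given).

```python
# Illustrative sketch only: an LSTM summarizes recent price windows, and its
# output is concatenated with fundamental ratios before a Random Forest
# classifier predicts the 10-weekday direction. Synthetic data stands in for
# the real price/fundamental series used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
n_samples, window = 500, 30
prices = rng.normal(size=(n_samples, window, 1))    # technical input: price windows
fundamentals = rng.normal(size=(n_samples, 5))      # fundamental input: e.g. P/E, ROE, ...
y = (rng.random(n_samples) > 0.5).astype(int)       # 10-weekday up/down label

# LSTM encoder for the price windows (trained here as a direction classifier).
lstm = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),
])
lstm.compile(optimizer="adam", loss="binary_crossentropy")
lstm.fit(prices, y, epochs=2, batch_size=32, verbose=0)

# Feed the LSTM's probability together with the fundamentals to a Random Forest.
technical_signal = lstm.predict(prices, verbose=0)
X = np.hstack([technical_signal, fundamentals])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("in-sample accuracy:", rf.score(X, y))
```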
- Europe > Spain (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Germany (0.04)
- (3 more...)
- Research Report > Experimental Study (1.00)
- Financial News (0.95)
Hi-DARTS: Hierarchical Dynamically Adapting Reinforcement Trading System
Sagong, Hoon, Kim, Heesu, Hong, Hanbeen
Conventional autonomous trading systems struggle to balance computational efficiency and market responsiveness due to their fixed operating frequency. We propose Hi-DARTS, a hierarchical multi-agent reinforcement learning framework that addresses this trade-off. Hi-DARTS utilizes a meta-agent to analyze market volatility and dynamically activate specialized Time Frame Agents for high-frequency or low-frequency trading as needed. During back-testing on AAPL stock from January 2024 to May 2025, Hi-DARTS yielded a cumulative return of 25.17% with a Sharpe Ratio of 0.75. Our work demonstrates that dynamic, hierarchical agents can achieve superior risk-adjusted returns while maintaining high computational efficiency.
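The abstract does not specify the agent interfaces, so the sketch below only illustrates the dispatch idea: a meta-agent monitors recent volatility and hands control to a high-frequency or low-frequency Time Frame Agent. The threshold, volatility proxy, and agent behaviours are invented placeholders, not Hi-DARTS' actual components.

```python
# Illustrative sketch of the hierarchical dispatch idea described in the
# abstract. All numbers and the agent logic are placeholders.
import numpy as np

def realized_volatility(returns: np.ndarray) -> float:
    """Standard deviation of recent returns as a simple volatility proxy."""
    return float(np.std(returns))

def high_freq_agent(price: float) -> str:
    return "trade-every-tick"      # placeholder for the high-frequency policy

def low_freq_agent(price: float) -> str:
    return "rebalance-daily"       # placeholder for the low-frequency policy

def meta_agent(returns: np.ndarray, price: float, threshold: float = 0.02) -> str:
    """Dispatch to a Time Frame Agent depending on current volatility."""
    if realized_volatility(returns) > threshold:
        return high_freq_agent(price)
    return low_freq_agent(price)

rng = np.random.default_rng(1)
recent_returns = rng.normal(0, 0.01, size=50)
print(meta_agent(recent_returns, price=200.0))
```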
A Deep Reinforcement Learning Approach to Automated Stock Trading, using xLSTM Networks
Sarlakifar, Faezeh, Asl, Mohammadreza Mohammadzadeh, Khaledi, Sajjad Rezvani, Salimi-Badr, Armin
Traditional Long Short-Term Memory (LSTM) networks are effective for handling sequential data but have limitations, such as vanishing gradients and difficulty capturing long-term dependencies, which can impact their performance in dynamic and risky environments like stock trading. To address these limitations, this study explores the use of the newly introduced Extended Long Short-Term Memory (xLSTM) network in combination with a deep reinforcement learning (DRL) approach for automated stock trading. Our proposed method utilizes xLSTM networks in both the actor and critic components, enabling effective handling of time-series data and dynamic market environments. Proximal Policy Optimization (PPO), with its ability to balance exploration and exploitation, is employed to optimize the trading strategy. Experiments were conducted using financial data from major tech companies over a comprehensive timeline, demonstrating that the xLSTM-based model outperforms LSTM-based methods in key trading evaluation metrics, including cumulative return, average profitability per trade, maximum earning rate, maximum pullback, and Sharpe ratio. These findings highlight the potential of xLSTM for enhancing DRL-based stock trading systems.
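As a rough illustration of the recurrent actor-critic the abstract describes, the sketch below uses a standard PyTorch nn.LSTM as a stand-in for the paper's xLSTM blocks (purely to keep the snippet dependency-free); the PPO update itself is omitted and all shapes are arbitrary.

```python
# Sketch of a recurrent actor-critic of the kind described in the abstract.
# nn.LSTM is only a placeholder for the paper's xLSTM blocks.
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    def __init__(self, n_features: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, n_actions)   # action logits (e.g. buy/hold/sell)
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs: torch.Tensor):
        out, _ = self.encoder(obs)                  # obs: (batch, time, features)
        last = out[:, -1, :]                        # use the final time step
        return self.actor(last), self.critic(last)

model = RecurrentActorCritic(n_features=8, n_actions=3)
obs = torch.randn(4, 30, 8)                         # 4 episodes of 30 time steps
logits, value = model(obs)
action = torch.distributions.Categorical(logits=logits).sample()
print(action.shape, value.shape)
```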
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
The Evolution of Reinforcement Learning in Quantitative Finance
Pippas, Nikolaos, Turkay, Cagatay, Ludvig, Elliot A.
Reinforcement Learning (RL) has experienced significant advancement over the past decade, prompting a growing interest in applications within finance. This survey critically evaluates 167 publications, exploring diverse RL applications and frameworks in finance. Financial markets, marked by their complexity, multi-agent nature, information asymmetry, and inherent randomness, serve as an intriguing test-bed for RL. Traditional finance offers certain solutions, and RL advances these with a more dynamic approach, incorporating machine learning methods, including transfer learning, meta-learning, and multi-agent solutions. This survey dissects key RL components through the lens of Quantitative Finance. We uncover emerging themes, propose areas for future research, and critique the strengths and weaknesses of existing methods.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.27)
- Europe > United Kingdom > England > West Midlands > Coventry (0.04)
- North America > United States > New York (0.04)
- (11 more...)
- Research Report (1.00)
- Overview (1.00)
- Instructional Material (0.92)
- Information Technology (1.00)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Economy (1.00)
- Leisure & Entertainment > Games (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Fuzzy Logic (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- (2 more...)
Reinforcement Learning for Trading
In this paper, we propose to use recurrent reinforcement learning to directly optimize such trading system performance functions, and we compare two different reinforcement learning methods. The first, Recurrent Reinforcement Learning, uses immediate rewards to train the trading systems, while the second (Q-Learning (Watkins 1989)) approximates discounted future rewards. These methodologies can be applied to optimizing systems designed to trade a single security or to trade portfolios. In addition, we propose a novel value function for risk-adjusted return that enables learning to be done online: the differential Sharpe ratio. Trading system profits depend upon sequences of interdependent decisions, and are thus path-dependent. Optimal trading decisions when the effects of transactions costs, market impact and taxes are included require knowledge of the current system state. In Moody, Wu, Liao & Saffell (1998), we demonstrate that reinforcement learning provides a more elegant and effective means for training trading systems when transaction costs are included, than do more standard supervised approaches.
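The differential Sharpe ratio mentioned above has a closed form in Moody et al.'s work; a small sketch of the online computation follows, with the decay rate eta and the toy return stream chosen only for illustration.

```python
# The differential Sharpe ratio is the first-order sensitivity of an
# exponentially weighted Sharpe ratio to the latest return, which makes
# online (per-step) learning possible:
#   A_t = A_{t-1} + eta * dA,  dA = R_t - A_{t-1}
#   B_t = B_{t-1} + eta * dB,  dB = R_t**2 - B_{t-1}
#   D_t = (B_{t-1} * dA - 0.5 * A_{t-1} * dB) / (B_{t-1} - A_{t-1}**2) ** 1.5
# Minimal sketch; variable names and the tiny example stream are illustrative.

def differential_sharpe(returns, eta=0.01, a=0.0, b=1e-6):
    """Yield the differential Sharpe ratio D_t for each new return R_t."""
    for r in returns:
        da, db = r - a, r ** 2 - b
        denom = (b - a ** 2) ** 1.5
        yield (b * da - 0.5 * a * db) / denom if denom > 0 else 0.0
        a, b = a + eta * da, b + eta * db

print(list(differential_sharpe([0.01, -0.005, 0.02, 0.0])))
```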
A Framework for Empowering Reinforcement Learning Agents with Causal Analysis: Enhancing Automated Cryptocurrency Trading
Amirzadeh, Rasoul, Thiruvady, Dhananjay, Nazari, Asef, Ee, Mong Shan
Despite advances in artificial intelligence-enhanced trading methods, developing a profitable automated trading system remains challenging in the rapidly evolving cryptocurrency market. This study aims to address these challenges by developing a reinforcement learning-based automated trading system for five popular altcoins (cryptocurrencies other than Bitcoin): Binance Coin, Ethereum, Litecoin, Ripple, and Tether. To this end, we present CausalReinforceNet, a framework framed as a decision support system. Designed as the foundational architecture of the trading system, the CausalReinforceNet framework enhances the capabilities of the reinforcement learning agent through causal analysis. Within this framework, we use Bayesian networks in the feature engineering process to identify the most relevant features with causal relationships that influence cryptocurrency price movements. Additionally, we incorporate probabilistic price direction signals from dynamic Bayesian networks to enhance our reinforcement learning agent's decision-making. Due to the high volatility of the cryptocurrency market, we design our framework to adopt a conservative approach that limits sell and buy position sizes to manage risk. We develop two agents using the CausalReinforceNet framework, each based on distinct reinforcement learning algorithms. The results indicate that our framework substantially surpasses the Buy-and-Hold benchmark strategy in profitability. Additionally, both agents generated notable returns on investment for Binance Coin and Ethereum.
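The abstract gives no implementation details; the sketch below merely illustrates the conservative decision layer it describes, combining an agent's raw action with a probabilistic direction signal (which the paper obtains from a dynamic Bayesian network) under a position-size cap. The probability thresholds and the 10% cap are assumptions.

```python
# Illustrative sketch of a conservative decision layer: an RL agent's raw
# action is gated by a probabilistic price-direction signal, and the resulting
# buy/sell size is capped. All thresholds and caps are assumed values.

MAX_FRACTION = 0.10   # never commit more than 10% of cash/holdings per trade (assumed cap)

def decide(rl_action: str, p_up: float, cash: float, coins: float, price: float):
    """Return (side, quantity) after applying the direction signal and the cap."""
    if rl_action == "buy" and p_up > 0.55:
        budget = cash * MAX_FRACTION
        return "buy", budget / price
    if rl_action == "sell" and p_up < 0.45:
        return "sell", coins * MAX_FRACTION
    return "hold", 0.0

print(decide("buy", p_up=0.62, cash=1_000.0, coins=2.0, price=250.0))
```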
- North America > United States > Texas (0.14)
- Oceania > Australia (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (3 more...)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > e-Commerce > Financial Technology (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
Stock Market Sentiment Classification and Backtesting via Fine-tuned BERT
With the rapid development of big data and computing devices, low-latency automatic trading platforms based on real-time information acquisition have become the main components of the stock trading market, so the topic of quantitative trading has received widespread attention. In markets that are not strongly efficient, human emotions and expectations often dominate market trends and trading decisions. Therefore, starting from the theory of emotion and taking East Money as an example, this paper crawls user comment titles from its corresponding stock forum and performs data cleaning. A natural language processing model, BERT, is then constructed and fine-tuned on existing annotated data sets. The experimental results show that the fine-tuned model improves to varying degrees over both the original model and the baseline model. Based on this model, the crawled user comments are labeled with emotional polarity, and the resulting labels are combined with the Alpha191 model in a regression, which yields significant results. The regression model is then used to predict the average price change over the next five days, and this prediction serves as a signal to guide automatic trading. The experimental results show that incorporating emotional factors increased the return rate by 73.8% compared to the baseline during the trading period, and by 32.41% compared to the original Alpha191 model. Finally, we discuss the advantages and disadvantages of incorporating emotional factors into quantitative trading and outline possible directions for future research.
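A minimal sketch of the regression-and-signal stage described above, assuming the sentiment polarity has already been produced by the fine-tuned BERT model; the factor set, the regression form, and the thresholding rule are illustrative stand-ins, and all data is synthetic.

```python
# Sketch: a sentiment polarity score (in the paper, from the fine-tuned BERT
# model) is appended to Alpha191-style factors and regressed against the
# average price change over the next five days; the fitted prediction is then
# thresholded into a trade signal. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days, n_factors = 250, 10
alpha_factors = rng.normal(size=(n_days, n_factors))   # stand-in for Alpha191 factors
sentiment = rng.uniform(-1, 1, size=(n_days, 1))       # stand-in for BERT polarity
y = rng.normal(size=n_days)                            # next-5-day average price change

X = np.hstack([alpha_factors, sentiment])
model = LinearRegression().fit(X, y)

pred = model.predict(X[-1:])[0]
signal = "buy" if pred > 0 else "sell"                 # naive thresholding rule (assumed)
print(f"predicted 5-day change: {pred:.4f} -> {signal}")
```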
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.88)
- (3 more...)
Quantitative Trading using Deep Q Learning
Reinforcement learning (RL) is a branch of machine learning that has been used in a variety of applications such as robotics, game playing, and autonomous systems. In recent years, there has been growing interest in applying RL to quantitative trading, where the goal is to make profitable trades in financial markets. This paper explores the use of RL in quantitative trading and presents a case study of an RL-based trading algorithm. The results show that RL can be a powerful tool for quantitative trading, and that it has the potential to outperform traditional trading algorithms. The use of reinforcement learning in quantitative trading represents a promising area of research that can potentially lead to the development of more sophisticated and effective trading systems. Future work could explore the use of alternative reinforcement learning algorithms, incorporate additional data sources, and test the system on different asset classes. Overall, our research demonstrates the potential of using reinforcement learning in quantitative trading and highlights the importance of continued research and development in this area. By developing more sophisticated and effective trading systems, we can potentially improve the efficiency of financial markets and generate greater returns for investors.
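Since the abstract stays high-level, the sketch below shows only a tabular Q-learning update for a toy trading loop with discrete buy/hold/sell actions; the paper itself uses Deep Q Learning, where a neural network would replace the table, and the transition and reward here are random placeholders.

```python
# Minimal tabular Q-learning loop for a toy trading setup with discrete states
# (e.g. binned price momentum) and actions 0=sell, 1=hold, 2=buy.
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

rng = np.random.default_rng(0)
state = 0
for step in range(1_000):
    # epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state = rng.integers(n_states)            # placeholder market transition
    reward = rng.normal()                          # placeholder profit-and-loss reward
    # standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))
```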
- Banking & Finance > Trading (1.00)
- Energy > Oil & Gas > Upstream (0.48)
Application of supervised learning models in the Chinese futures market
Global financial systems have grown considerably in size, concentration, and complexity over the past few decades, and their complexity now exceeds the modelling capabilities of traditional quantitative methods. In addition, some very useful data sets, such as satellite images, voice recordings, or news sentiment, are beyond the reach of econometrics [2]. In recent years, many hedge funds have started experimenting with machine learning (ML) methods. ML is a subset of artificial intelligence in which machines learn from previous experience [3]. Unlike traditional programming, where developers need to anticipate every potential condition, ML solutions can effectively tailor the output to the data.
- Asia > China > Shanghai > Shanghai (0.06)
- South America > Argentina > Patagonia > Río Negro Province > Viedma (0.04)
- North America > United States (0.04)
- (2 more...)
- Banking & Finance > Trading (1.00)
- Materials > Chemicals > Commodity Chemicals > Petrochemicals > Polymers & Plastics (0.68)
Under development - Spxbot Blog
The new trainer has taken shape: it should provide better preprocessing for the model and has just been tested. It will now be applied to the basic SPX data so that each bar is analyzed and ranked. More testing lies ahead; this is the basic process for a neural network model. This new learning model is much richer than the previous one, and I hope it will enhance the performance. The stop is the key factor for every position: it is the price at which your position will be closed.
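As a rough illustration of the stop logic described in the post (not the blog's actual code), a position can be checked against its stop level on every new price:

```python
# Minimal sketch: every open position carries a stop level, and the position
# is closed as soon as price crosses it. Position fields and prices are
# illustrative placeholders.

def should_close(side: str, price: float, stop: float) -> bool:
    """A long position closes when price falls to the stop; a short when it rises to it."""
    return price <= stop if side == "long" else price >= stop

position = {"side": "long", "entry": 4500.0, "stop": 4450.0}
for price in (4510.0, 4480.0, 4449.0):
    if should_close(position["side"], price, position["stop"]):
        print(f"stop hit at {price}: closing position")
        break
```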