hft
Harmonic fractal transformation for modeling complex neuronal effects: from bursting and noise shaping to waveform sensitivity and noise-induced subthreshold spiking
We propose the first fractal frequency mapping, which in a simple form enables the replication of complex neuronal effects. Unlike conventional filters, which suppress or amplify the input spectral components according to the filter weights, the transformation excites novel components through a fractal recomposition of the input spectrum, resulting in the formation of spikes at resonant frequencies that are optimal for sampling. This enables high-sensitivity detection, robustness to noise, and noise-induced signal amplification. The proposed model illustrates that neuronal functionality can be viewed as a linear summation of the spectrum over a nonlinearly transformed frequency domain.
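The abstract does not give the mapping itself, but the closing sentence ("a linear summation of the spectrum over a nonlinearly transformed frequency domain") can be illustrated with a toy sketch. The octave ladder f, 2f, 4f, ... used below is an assumed recomposition, not the paper's actual transform:

```python
import numpy as np

def harmonic_recomposition(x, fs, n_octaves=3):
    """Toy fractal frequency mapping: rather than weighting existing
    spectral components (as a conventional filter would), sum the input
    spectrum over a nonlinearly transformed frequency axis — here the
    assumed octave ladder f, 2f, 4f, ..."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    out = np.zeros_like(spectrum)
    for k in range(n_octaves):
        # Fold energy from 2^k * f back onto f, exciting components at
        # harmonically related frequencies that the input need not contain.
        out += np.interp(freqs * 2**k, freqs, spectrum, right=0.0)
    return freqs, out

fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t)          # pure 50 Hz tone
freqs, out = harmonic_recomposition(x, fs)
# The output now has energy at 25 Hz (folded down from 50 Hz),
# a component absent from the input spectrum itself.
```

The point of the sketch is only the qualitative behaviour the abstract describes: new components appear at frequencies harmonically related to the input, rather than the input components being reweighted.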
Think Only When You Need with Large Hybrid-Reasoning Models
Jiang, Lingjie, Wu, Xun, Huang, Shaohan, Dong, Qingxiu, Chi, Zewen, Dong, Li, Zhang, Xingxing, Lv, Tengchao, Cui, Lei, Wei, Furu
Recent Large Reasoning Models (LRMs) have shown substantially improved reasoning capabilities over traditional Large Language Models (LLMs) by incorporating extended thinking processes prior to producing final responses. However, excessively lengthy thinking introduces substantial overhead in terms of token consumption and latency, and is particularly unnecessary for simple queries. In this work, we introduce Large Hybrid-Reasoning Models (LHRMs), the first kind of model capable of adaptively determining whether to perform thinking based on the contextual information of user queries. To achieve this, we propose a two-stage training pipeline comprising Hybrid Fine-Tuning (HFT) as a cold start, followed by online reinforcement learning with the proposed Hybrid Group Policy Optimization (HGPO) to implicitly learn to select the appropriate thinking mode. Furthermore, we introduce a metric called Hybrid Accuracy to quantitatively assess the model's capability for hybrid thinking. Extensive experimental results show that LHRMs can adaptively perform hybrid thinking on queries of varying difficulty and type. They outperform existing LRMs and LLMs in reasoning and general capabilities while significantly improving efficiency. Together, our work advocates for a reconsideration of the appropriate use of extended thinking processes and provides a solid starting point for building hybrid thinking systems.
- Asia > Thailand > Bangkok > Bangkok (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > Middle East > Iraq > Basra Governorate > Basra (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
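The abstract names the Hybrid Accuracy metric but not its formula. One plausible reading — the fraction of queries on which the model's chosen mode matches the mode judged appropriate — can be sketched as follows (this is an interpretation, not the paper's definition):

```python
def hybrid_accuracy(chosen_modes, preferred_modes):
    """Plausible reading of Hybrid Accuracy: the fraction of queries on
    which the model's chosen mode ("think" vs "no_think") matches the
    mode judged appropriate for that query. The formula is an assumption;
    the abstract only names the metric."""
    assert len(chosen_modes) == len(preferred_modes)
    hits = sum(c == p for c, p in zip(chosen_modes, preferred_modes))
    return hits / len(chosen_modes)

chosen    = ["think", "no_think", "think", "no_think"]
preferred = ["think", "no_think", "no_think", "no_think"]
print(hybrid_accuracy(chosen, preferred))  # 0.75
```

Under this reading, a model that always thinks scores well on hard queries but is penalised for spending tokens on simple ones, which is exactly the trade-off the paper targets.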
HFT: Half Fine-Tuning for Large Language Models
Hui, Tingfeng, Zhang, Zhenyu, Wang, Shuohuan, Xu, Weiran, Sun, Yu, Wu, Hua
Large language models (LLMs) with one or more fine-tuning phases have become a necessary step to unlock various capabilities, enabling LLMs to follow natural language instructions or align with human preferences. However, sequential training carries the risk of catastrophic forgetting: the parametric knowledge or abilities learned in previous stages may be overwhelmed by incoming training data. In this paper, we find that by regularly resetting partial parameters, LLMs can restore some of their original knowledge. Inspired by this, we introduce Half Fine-Tuning (HFT) for LLMs as a substitute for full fine-tuning (FFT) to mitigate the forgetting issue: half of the parameters are selected to learn new tasks while the other half are frozen to retain previous knowledge. We provide a feasibility analysis from the perspective of optimization and interpret the parameter selection operation as a regularization term. Without changing the model architecture, HFT can be seamlessly integrated into existing fine-tuning frameworks. Extensive experiments and analysis on supervised fine-tuning, direct preference optimization, and continual learning consistently demonstrate the effectiveness, robustness, and efficiency of HFT. Compared with FFT, HFT not only significantly alleviates the forgetting problem but also achieves the best performance on a series of downstream benchmarks, with an approximately 30% reduction in training time.
- North America > United States (0.14)
- Europe > Romania > Sud - Muntenia Development Region > Giurgiu County > Giurgiu (0.04)
- Asia > China > Beijing > Beijing (0.04)
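The core mechanism — update half of the parameters, freeze the other half so they retain their pre-trained values — can be sketched with a toy linear model. The random half-selection below is a minimal illustration, not necessarily the paper's selection scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = w·x fit by SGD with Half Fine-Tuning: a randomly
# selected half of the parameters is frozen (its gradient masked to zero)
# so it keeps its pre-trained values, while the other half adapts.
dim = 8
w_pretrained = rng.normal(size=dim)
w = w_pretrained.copy()

frozen = rng.permutation(dim) < dim // 2   # boolean mask, exactly half True

lr = 0.1
for _ in range(200):
    x = rng.normal(size=dim)
    y_target = x.sum()                      # new task: all-ones weights
    grad = (w @ x - y_target) * x           # dL/dw for squared error
    grad[frozen] = 0.0                      # HFT: freeze half the parameters
    w -= lr * grad

# Frozen coordinates remain bit-identical to the pre-trained weights,
# while the trainable half has moved toward the new task.
```

The frozen half acts as a hard guarantee against forgetting along those coordinates, which matches the paper's view of parameter selection as a form of regularization.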
Harnessing Deep Q-Learning for Enhanced Statistical Arbitrage in High-Frequency Trading: A Comprehensive Exploration
The realm of High-Frequency Trading (HFT) is characterized by rapid decision-making processes that capitalize on fleeting market inefficiencies. As the financial markets become increasingly competitive, there is a pressing need for innovative strategies that can adapt and evolve with changing market dynamics. Enter Reinforcement Learning (RL), a branch of machine learning where agents learn by interacting with their environment, making it an intriguing candidate for HFT applications. This paper dives deep into the integration of RL in statistical arbitrage strategies tailored for HFT scenarios. By leveraging the adaptive learning capabilities of RL, we explore its potential to unearth patterns and devise trading strategies that traditional methods might overlook. We delve into the intricate exploration-exploitation trade-offs inherent in RL and how they manifest in the volatile world of HFT. Furthermore, we confront the challenges of applying RL in non-stationary environments, typical of financial markets, and investigate methodologies to mitigate associated risks. Through extensive simulations and backtests, our research reveals that RL not only enhances the adaptability of trading strategies but also shows promise in improving profitability metrics and risk-adjusted returns.
- Banking & Finance > Trading (1.00)
- Energy > Oil & Gas > Upstream (0.34)
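The exploration–exploitation loop the abstract describes can be shown with a minimal Q-learning sketch. To stay dependency-free this uses tabular Q-learning rather than a deep network, and the mean-reverting toy market is invented for illustration — none of it is the paper's actual setup:

```python
import random

random.seed(0)

# Minimal tabular Q-learning on a toy market: the state is whether the
# last tick moved up or down, the action is to go long or short the next
# tick, and the environment is artificially mean-reverting (an up-tick
# tends to be followed by a down-tick).
ACTIONS = ["long", "short"]
Q = {(s, a): 0.0 for s in ("up", "down") for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def next_tick(state):
    # Mean reversion: 80% chance the next move reverses the last one.
    reverse = random.random() < 0.8
    return ("down" if state == "up" else "up") if reverse else state

state = "up"
for _ in range(5000):
    # epsilon-greedy: explore with probability epsilon, else exploit.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    new_state = next_tick(state)
    move = 1.0 if new_state == "up" else -1.0
    reward = move if action == "long" else -move
    best_next = max(Q[(new_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = new_state

# The learned policy fades the last move: short after an up-tick,
# long after a down-tick.
```

The non-stationarity problem the abstract raises shows up immediately in this sketch: if the 80% reversal probability drifts, the table goes stale, which is why the paper investigates mitigation methodologies.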
Machine Learning for Finance
ML excels at handling large and complex volumes of data, a strength well suited to the finance industry. ML has found many useful applications in finance due to the high volume of historical financial data produced in the industry. The technology has come to play an important part in many aspects of the financial environment, from loan approval and credit ratings to asset management and risk assessment. Let's talk about some of its applications. Robo-advisors are a popular machine learning application in finance: online programs that provide automated financial advice and support. They offer portfolio management services that automatically create and manage a client's investment portfolio using algorithms and statistics.
Learning Distributed Representations from Reviews for Collaborative Filtering
Almahairi, Amjad, Kastner, Kyle, Cho, Kyunghyun, Courville, Aaron
Recent work has shown that collaborative filtering-based recommender systems can be improved by incorporating side information, such as natural language reviews, as a way of regularizing the derived product representations. Motivated by the success of this approach, we introduce two different models of reviews and study their effect on collaborative filtering performance. While the previous state-of-the-art approach is based on a latent Dirichlet allocation (LDA) model of reviews, the models we explore are neural network based: a bag-of-words product-of-experts model and a recurrent neural network. We demonstrate that the increased flexibility offered by the product-of-experts model allowed it to achieve state-of-the-art performance on the Amazon review dataset, outperforming the LDA-based approach. However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.
- Europe > Austria > Vienna (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (5 more...)
- Leisure & Entertainment (0.68)
- Media > Film (0.46)
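A product-of-experts over a vocabulary multiplies the experts' word distributions and renormalizes, which is equivalent to softmaxing the summed logits. The sketch below shows that generic mechanism only, not the paper's architecture (whose experts are learned from review text):

```python
import math

def product_of_experts(expert_logits):
    """Combine experts by multiplying their distributions: p(w) is
    proportional to the product over experts of p_k(w), i.e. the
    softmax of the summed logits."""
    vocab = len(expert_logits[0])
    summed = [sum(expert[i] for expert in expert_logits) for i in range(vocab)]
    z = math.log(sum(math.exp(s) for s in summed))   # log partition function
    return [math.exp(s - z) for s in summed]

# Two experts over a 3-word vocabulary; words that both experts
# favour dominate the product, words either expert vetoes are crushed.
probs = product_of_experts([[2.0, 0.0, -1.0], [1.5, 0.5, -2.0]])
```

This "sharp veto" behaviour is what makes a PoE a natural regularizer: each expert can independently rule out words, and the combined distribution is peakier than any single expert's.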
AI and Finance: No Room for Philosophy
The Cambridge Handbook of Artificial Intelligence defines AI as a cross-disciplinary approach to understanding, modelling and creating intelligence of various forms. According to Konstantine Arkoudas and Selmer Bringsjord, it is a field devoted to building machines capable of displaying behaviours deemed intelligent, at least in well-controlled environments. A machine that possesses intelligence similar to, or superior to, that of a human poses numerous ethical and legal issues. For example, Nick Bostrom in Superintelligence: Paths, Dangers, Strategies wonders how an AI would view human values and purpose. He suggests that the idea of a machine (with its essence being algorithms) is incompatible with the biological nature of the feelings that form the basis of our moral values.
- Banking & Finance > Trading (0.98)
- Law (0.70)
What Have Manchester United, HFT And Deep Learning Got In Common?
Gaurav Chakravorty, co-founder of AI investment advisor qplum, likes to use sporting analogies to illustrate changing trends within finance. The way high-frequency trading (HFT) seemed to work like magic in the old days reminds him of Manchester United under Sir Alex Ferguson. Between 1993 and 2013 Manchester United won the English Premier League 13 times, an incredible record. The truth was that Ferguson used machinery other clubs had not yet happened upon: he would scout clubs in Europe for talented youngsters and was willing to pay top dollar for young stars without a proven track record at a big club.
- Leisure & Entertainment > Sports > Soccer (1.00)
- Banking & Finance > Trading (1.00)
Epitomic Image Super-Resolution
Yang, Yingzhen (University of Illinois at Urbana-Champaign) | Wang, Zhangyang (University of Illinois at Urbana-Champaign) | Wang, Zhaowen (Adobe Research) | Chang, Shiyu (University of Illinois at Urbana-Champaign) | Liu, Ding (University of Illinois at Urbana-Champaign) | Shi, Honghui (University of Illinois at Urbana-Champaign) | Huang, Thomas S. (University of Illinois at Urbana-Champaign)
We propose Epitomic Image Super-Resolution (ESR) to enhance the current internal SR methods that exploit the self-similarities in the input. Instead of local nearest neighbor patch matching used in most existing internal SR methods, ESR employs epitomic patch matching that features robustness to noise, and both local and non-local patch matching. Extensive objective and subjective evaluation demonstrate the effectiveness and advantage of ESR on various images.
- North America > United States > Illinois > Champaign County > Urbana (0.05)
- North America > United States > California > Santa Clara County > San Jose (0.05)
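The abstract contrasts ESR's epitomic matching with the local nearest-neighbour patch matching used by most internal SR methods. A minimal 1-D sketch of that baseline — exhaustive nearest-neighbour search exploiting self-similarity — looks like this (the epitome itself is not reproduced here):

```python
import numpy as np

def best_match(signal, patch, exclude):
    """Exhaustive nearest-neighbour patch matching: find the start index
    of the window most similar (in squared error) to `patch`, skipping
    the patch's own location `exclude`. 1-D stand-in for 2-D image patches."""
    p = len(patch)
    best_i, best_d = None, float("inf")
    for i in range(len(signal) - p + 1):
        if i == exclude:
            continue
        d = float(np.sum((signal[i:i + p] - patch) ** 2))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# Two concatenated copies of one sine period: a self-similar signal,
# so each patch has an exact match one period (32 samples) away.
signal = np.concatenate([np.sin(np.linspace(0, 2 * np.pi, 32))] * 2)
patch = signal[4:12]
```

ESR's epitomic matching replaces this purely local search with matching against a compact epitome, which is what gives it the noise robustness and non-local matching the abstract claims.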
Former nuclear physicist Henri Waelbroeck explains how machine learning mitigates high frequency trading
Henri Waelbroeck seems to fit the popular image of the scientist transplanted into the world of high finance and hedge fund trading, the sort of stereotype found in books like "The Fear Index" by Robert Harris. Waelbroeck, director of research at machine learning-enhanced trade execution system Portware, was previously a professor at the Institute of Nuclear Sciences at the National University of Mexico (UNAM). His areas of expertise include complex systems science, quantum gravity theories, genetic algorithms, artificial neural networks, and chaos theory. The impression Waelbroeck conveys is one of precision. He explains that algorithms have grown in complexity since being introduced to the world of trading around 2000. This has made it increasingly difficult for traders to understand each vendor's full algorithm platform and how to optimally select an algorithm for each particular trade that comes in from a portfolio manager.