Collaborating Authors

 Wang, Liyang


Research on Optimizing Real-Time Data Processing in High-Frequency Trading Algorithms using Machine Learning

arXiv.org Artificial Intelligence

High-frequency trading (HFT) represents a pivotal and intensely competitive domain within the financial markets. The velocity and accuracy of data processing exert a direct influence on profitability, underscoring the significance of this field. The objective of this work is to optimise the real-time processing of data in high-frequency trading algorithms. The dynamic feature selection mechanism is responsible for monitoring and analysing market data in real time through clustering and feature weight analysis, with the objective of automatically selecting the most relevant features. This process employs an adaptive feature extraction method, which enables the system to respond and adjust its feature set in a timely manner when the data input changes, thus ensuring the efficient utilisation of data. The lightweight neural networks are designed in a modular fashion, comprising fast convolutional layers and pruning techniques that facilitate the expeditious completion of data processing and output prediction. In contrast to conventional deep learning models, the neural network architecture has been specifically designed to minimise the number of parameters and computational complexity, thereby markedly reducing the inference time. The experimental results demonstrate that the model is capable of maintaining consistent performance in the context of varying market conditions, thereby illustrating its advantages in terms of processing speed and revenue enhancement.
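The abstract does not publish the clustering-based weighting scheme itself, so the following is only a minimal sketch of the adapt-as-data-arrives idea, using the absolute correlation of each feature with next-step returns as a stand-in feature weight; all names (`select_features`, `momentum`, and so on) are hypothetical illustrations, not the paper's implementation.

```python
from statistics import mean

def corr(xs, ys):
    """Pearson correlation; returns 0.0 for a zero-variance feature."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def select_features(window, returns, k=2):
    """window: feature name -> recent values over the current sliding window.
    Re-ranks features on every new window, so the active feature set
    adjusts automatically as the data input changes."""
    weights = {name: abs(corr(vals, returns)) for name, vals in window.items()}
    return sorted(weights, key=weights.get, reverse=True)[:k]
```

On each new market-data window the caller would recompute `select_features` and feed only the surviving features to the downstream lightweight network, which is the timeliness property the abstract emphasises.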


Research on Dynamic Data Flow Anomaly Detection based on Machine Learning

arXiv.org Artificial Intelligence

The sophistication and diversity of contemporary cyberattacks have rendered the use of proxies, gateways, firewalls, and encrypted tunnels as a standalone defensive strategy inadequate. Consequently, the proactive identification of data anomalies has emerged as a prominent area of research within the field of data security. The majority of extant studies concentrate on class-balanced data, with the consequence that detection performance is suboptimal on unbalanced data. In this study, the unsupervised learning method is employed to identify anomalies in dynamic data flows. Initially, multi-dimensional features are extracted from real-time data, and a clustering algorithm is utilised to analyse the patterns of the data. This enables the potential outliers to be automatically identified. By clustering similar data, the model is able to detect data behaviour that deviates significantly from normal traffic without the need for labelled data. The results of the experiments demonstrate that the proposed method exhibits high accuracy in the detection of anomalies across a range of scenarios. Notably, it demonstrates robust and adaptable performance, particularly in the context of unbalanced data.
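A minimal sketch of the clustering-plus-distance idea described above, assuming a plain k-means with deterministic initialisation and a fixed distance threshold (the paper's actual algorithm, features, and thresholds are not given in the abstract): points far from every cluster centroid are flagged as anomalies, with no labels required.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    # deterministic init: first k points (a real system would use k-means++)
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        # recompute each centroid as its cluster mean; keep old one if empty
        centroids = [
            tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids

def flag_anomalies(points, k=2, thresh=10.0):
    """Indices of points whose distance to the nearest centroid exceeds thresh."""
    centroids = kmeans(points, k)
    return [i for i, p in enumerate(points)
            if min(dist(p, c) for c in centroids) > thresh]
```

Because the anomaly score is a distance rather than a learned class boundary, a rare anomaly class does not need to be represented in training data, which is the robustness-to-imbalance property the abstract claims.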


Application of Natural Language Processing in Financial Risk Detection

arXiv.org Artificial Intelligence

This paper explores the application of Natural Language Processing (NLP) in financial risk detection. By constructing an NLP-based financial risk detection model, this study aims to identify and predict potential risks in financial documents and communications. First, the fundamental concepts of NLP and its theoretical foundation, including text mining methods, NLP model design principles, and machine learning algorithms, are introduced. Second, the process of text data preprocessing and feature extraction is described. Finally, the effectiveness and predictive performance of the model are validated through empirical research. The results show that the NLP-based financial risk detection model performs excellently in risk identification and prediction, providing effective risk management tools for financial institutions. This study offers valuable references for the field of financial risk management, utilizing advanced NLP techniques to improve the accuracy and efficiency of financial risk detection.
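As an illustration of the preprocessing and feature-extraction step described above, here is a minimal bag-of-words pipeline in pure Python; it is a generic sketch of that stage, not the paper's model, and the token-length filter standing in for a stopword list is an assumption.

```python
import re
from collections import Counter

def preprocess(text):
    # lowercase, keep alphabetic tokens, drop very short tokens
    # (a crude stand-in for a real stopword filter)
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if len(t) > 2]

def bag_of_words(docs):
    """Return (sorted vocabulary, term-frequency vector per document)."""
    vocab = sorted({t for d in docs for t in preprocess(d)})
    index = {t: i for i, t in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for t, n in Counter(preprocess(d)).items():
            v[index[t]] = n
        vectors.append(v)
    return vocab, vectors
```

The resulting vectors are what a downstream classifier (logistic regression, a neural network, etc.) would consume to score a document's risk.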


Research on Edge Detection of LiDAR Images Based on Artificial Intelligence Technology

arXiv.org Artificial Intelligence

LiDAR works by emitting laser pulses and measuring their reflection times to accurately obtain three-dimensional spatial information, thus generating high-resolution point cloud data and images. However, the application of LiDAR images faces numerous challenges, particularly in edge detection, where traditional methods often fail to meet practical needs due to insufficient detection accuracy and high computational complexity. Edge detection, as a crucial step in image processing, directly impacts subsequent tasks such as image segmentation, object recognition, and scene understanding[1]. Accurate edge detection can improve target recognition accuracy, optimize navigation path planning, and enhance environmental perception reliability. Therefore, studying an efficient and accurate LiDAR image edge detection method has significant theoretical value and application prospects. Existing edge detection methods, such as the Canny and Sobel algorithms, perform well on conventional images but often struggle with the unique noise characteristics and data structure of LiDAR images. With the rapid advancement of artificial intelligence technology, deep learning has achieved remarkable results in image processing. However, applying deep learning to LiDAR image edge detection still faces challenges such as complex data preprocessing, high difficulty in model training, and significant computational resource demands. Hence, there is an urgent need for an innovative AI-based edge detection method to address these challenges. This study aims to explore and develop an AI-based edge detection method for LiDAR images. The main research contents include: 1. Reviewing the current state of LiDAR technology and its application in edge detection.
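For reference, the Sobel operator cited above as a traditional baseline can be sketched in a few lines; this is the classical gradient-magnitude algorithm, not the AI-based method the paper proposes.

```python
def sobel_magnitude(img):
    """img: 2D list of intensities. Returns the Sobel gradient magnitude
    for interior pixels (the 1-pixel border is left at 0)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On LiDAR-derived intensity or range images, the noise characteristics the abstract mentions show up as spurious high responses from exactly this kind of fixed-kernel filter, which motivates the learned approach.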


Research on Credit Risk Early Warning Model of Commercial Banks Based on Neural Network Algorithm

arXiv.org Artificial Intelligence

In the realm of globalized financial markets, commercial banks are confronted with an escalating magnitude of credit risk, thereby imposing heightened requisites upon the security of bank assets and financial stability. This study harnesses advanced neural network techniques, notably the Backpropagation (BP) neural network, to pioneer a novel model for preempting credit risk in commercial banks. The discourse initially scrutinizes conventional financial risk preemptive models, such as ARMA, ARCH, and Logistic regression models, critically analyzing their real-world applications. Subsequently, the exposition elaborates on the construction process of the BP neural network model, encompassing network architecture design, activation function selection, parameter initialization, and objective function construction. Through comparative analysis, the superiority of neural network models in preempting credit risk in commercial banks is elucidated. The experimental segment selects specific bank data, validating the model's predictive accuracy and practicality. Research findings evince that this model efficaciously enhances the foresight and precision of credit risk management.
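A minimal illustration of the BP training loop described above — forward pass, error backpropagation through one hidden layer, and gradient-descent weight updates. The architecture, squared-error loss, learning rate, and toy borrower features here are assumptions for illustration; they are not the paper's configuration or data.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class BPNet:
    """Minimal one-hidden-layer backpropagation network (illustrative only)."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train(self, data, lr=0.5, epochs=2000):
        for _ in range(epochs):
            for x, y in data:
                out = self.forward(x)
                d_out = (out - y) * out * (1 - out)         # output delta, squared loss
                for j, h in enumerate(self.h):
                    d_h = d_out * self.w2[j] * h * (1 - h)  # hidden delta (old w2)
                    self.w2[j] -= lr * d_out * h
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * d_h * xi
                    self.b1[j] -= lr * d_h
                self.b2 -= lr * d_out
```

A real early-warning model would of course use many financial indicators, a held-out validation set, and a calibrated decision threshold; the sketch only shows the mechanics of the BP update.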


Optimization of Worker Scheduling at Logistics Depots Using Genetic Algorithms and Simulated Annealing

arXiv.org Artificial Intelligence

The efficient scheduling of permanent and temporary workers is crucial for optimizing the efficiency of the logistics depot while minimizing labor usage. The study begins by establishing a 0-1 integer linear programming model, with decision variables determining the scheduling of permanent and temporary workers for each time slot on a given day. The objective function aims to minimize person-days, while constraints ensure fulfillment of hourly labor requirements, limit workers to one time slot per day, cap consecutive working days for permanent workers, and maintain non-negativity and integer constraints. The model is then solved using genetic algorithms and simulated annealing.

Improving the efficiency of sortation center management has a direct impact on the fulfillment efficiency and operational costs of the entire logistics network. Staff management in sortation centers is a key challenge. Staffing needs to be adjusted according to the forecasted shipment volume to ensure a sufficient workforce to handle the flow of goods during peak hours while avoiding the wastage of excess manpower during low-demand times. Staff scheduling based on effective solution algorithms becomes one of the key strategies to improve the efficiency of the sorting center. By reasonably allocating regular and temporary workers, the sorting speed and accuracy can be improved, thus reducing the overall logistics cost and improving customer satisfaction.
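The solve step for a 0-1 scheduling model of this kind can be illustrated with a toy simulated-annealing search over a binary assignment vector. The cost function, penalty weight, and cooling schedule below are assumptions made for illustration, not the paper's formulation; it minimises workers used subject to per-slot demand, with shortfalls penalised.

```python
import math, random

def anneal(n_workers, covers, demand, t0=5.0, cooling=0.995, steps=4000, seed=1):
    """covers[w]: set of time slots worker w can staff.
    demand[s]: workers required in slot s.
    State: 0/1 vector x (worker scheduled or not)."""
    rng = random.Random(seed)

    def cost(x):
        staffed = {s: 0 for s in demand}
        for w, on in enumerate(x):
            if on:
                for s in covers[w]:
                    staffed[s] += 1
        shortfall = sum(max(0, demand[s] - staffed[s]) for s in demand)
        return sum(x) + 100 * shortfall   # workers used + heavy unmet-demand penalty

    x = [1] * n_workers                   # feasible start: schedule everyone
    best, best_cost, t = x[:], cost(x), t0
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n_workers)] ^= 1  # flip one worker in or out
        dc = cost(y) - cost(x)
        # accept improvements always; accept worse moves with prob exp(-dc/t)
        if dc <= 0 or rng.random() < math.exp(-dc / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x[:], cost(x)
        t *= cooling
    return best, best_cost
```

A genetic algorithm would explore the same 0-1 search space with a population, crossover, and mutation instead of single-state flips; both are standard metaheuristics for integer programs of this shape.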


Research on Splicing Image Detection Algorithms Based on Natural Image Statistical Characteristics

arXiv.org Artificial Intelligence

With the development and widespread application of digital image processing technology, image splicing has become a common method of image manipulation, raising numerous security and legal issues. This paper introduces a new splicing image detection algorithm based on the statistical characteristics of natural images, aimed at improving the accuracy and efficiency of splicing image detection. By analyzing the limitations of traditional methods, we have developed a detection framework that integrates advanced statistical analysis techniques and machine learning methods. The algorithm has been validated using multiple public datasets, showing high accuracy in detecting spliced edges and locating tampered areas, as well as good robustness. Additionally, we explore the potential applications and challenges faced by the algorithm in real-world scenarios. This research not only provides an effective technological means for the field of image tampering detection but also offers new ideas and methods for future related research.
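One simple instance of a natural-image statistic is block-level variance consistency: spliced regions often carry noise statistics inconsistent with the rest of the image. The sketch below is an illustrative stand-in for that general idea, not the paper's detection framework; the block size and deviation factor are arbitrary assumptions.

```python
from statistics import mean, pvariance

def block_variances(img, bs=2):
    """Split a 2D intensity grid into bs x bs blocks, row-major order,
    and return each block's population variance."""
    h, w = len(img), len(img[0])
    feats = []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            block = [img[y + j][x + i] for j in range(bs) for i in range(bs)]
            feats.append(pvariance(block))
    return feats

def suspicious_blocks(img, bs=2, factor=2.0):
    """Indices of blocks whose variance deviates strongly from the image-wide mean."""
    feats = block_variances(img, bs)
    typical = mean(feats)
    return [i for i, v in enumerate(feats) if v > factor * typical]
```

A real detector would combine many such statistics and feed them to a trained classifier rather than thresholding a single feature.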


Research on Detection of Floating Objects in River and Lake Based on AI Intelligent Image Recognition

arXiv.org Artificial Intelligence

With the rapid advancement of artificial intelligence technology, AI-enabled image recognition has emerged as a potent tool for addressing challenges in traditional environmental monitoring. This study focuses on the detection of floating objects in river and lake environments, exploring an innovative approach based on deep learning. By intricately analyzing the technical pathways for detecting static and dynamic features and considering the characteristics of river and lake debris, a comprehensive image acquisition and processing workflow has been developed. The study highlights the application and performance comparison of three mainstream deep learning models (SSD, Faster-RCNN, and YOLOv5) in debris identification. Additionally, a detection system for floating objects has been designed and implemented, encompassing both hardware platform construction and software framework development. Through rigorous experimental validation, the proposed system has demonstrated its ability to significantly enhance the accuracy and efficiency of debris detection, thus offering a new technological avenue for water quality monitoring in rivers and lakes.
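Performance comparisons among detectors such as SSD, Faster-RCNN, and YOLOv5 conventionally rest on the intersection-over-union (IoU) between predicted and ground-truth boxes; the abstract does not state the paper's evaluation protocol, so the standard axis-aligned-box formula is shown only as background.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as correct when its IoU with a ground-truth debris box exceeds a chosen threshold (0.5 is the common default), which is how accuracy figures for the three models become comparable.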