VeMo: A Lightweight Data-Driven Approach to Model Vehicle Dynamics
Oddo, Girolamo, Nuca, Roberto, Parsani, Matteo
Abstract--Developing a dynamic model for a high-performance vehicle is a complex problem that requires extensive structural information about the system under analysis. This information is often unavailable to those who did not design the vehicle and represents a typical issue in autonomous driving applications, which are frequently developed on top of existing vehicles; vehicle models are therefore developed under conditions of information scarcity. This paper proposes a lightweight encoder-decoder model based on Gated Recurrent Unit layers to correlate the vehicle's future state with its past states, measured onboard, and the control actions performed by the driver. The results demonstrate that the model achieves a maximum mean relative error below 2.6% in extreme dynamic conditions. It also shows good robustness when subjected to noisy input data across the frequency components of interest. Furthermore, despite being entirely data-driven and free of physical constraints, the model exhibits physical consistency in the output signals, such as longitudinal and lateral accelerations, yaw rate, and the vehicle's longitudinal velocity. In the automotive sector, developing a representative vehicle dynamics model is a complex and multifaceted challenge [1]-[3]. Numerous nonlinear factors influence vehicle dynamics, including tire characteristics, suspension geometry, aerodynamics, drivetrain effects, and external environmental factors, such as road surface grip conditions and climatic effects (e.g., wind). Accurately capturing these effects in a computational model requires high-fidelity multibody simulation software and a profound understanding of the vehicle system.
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.48)
- Leisure & Entertainment > Sports > Motorsports (0.46)
- Information Technology > Robotics & Automation (0.34)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
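The core building block named in the abstract, the Gated Recurrent Unit, can be illustrated with a minimal NumPy sketch of one recurrence step. The layer sizes, random weights, and input window below are hypothetical stand-ins (the paper's actual architecture and training setup are not specified here); the gate equations follow the standard GRU formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step: x is the input (past state + controls), h the hidden state.
    W, U, b each stack the update (z), reset (r), and candidate (n) parameters."""
    z = sigmoid(W['z'] @ x + U['z'] @ h + b['z'])        # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h + b['r'])        # reset gate
    n = np.tanh(W['n'] @ x + U['n'] @ (r * h) + b['n'])  # candidate state
    return (1 - z) * n + z * h                           # interpolated update

rng = np.random.default_rng(0)
din, dh = 6, 16                                          # hypothetical sizes
W = {k: rng.normal(0, 0.1, (dh, din)) for k in 'zrn'}
U = {k: rng.normal(0, 0.1, (dh, dh)) for k in 'zrn'}
b = {k: np.zeros(dh) for k in 'zrn'}

h = np.zeros(dh)
for t in range(50):                                      # encode a past window
    h = gru_step(rng.normal(size=din), h, W, U, b)
print(h.shape)  # (16,)
```

In an encoder-decoder arrangement, the final hidden state of such a recurrence summarizes the measured history before a second GRU rolls out the predicted future states.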
Super-Resolution Off the Grid
Qingqing Huang, Sham M. Kakade
Super-resolution is the problem of recovering a superposition of point sources using bandlimited measurements, which may be corrupted with noise. This signal processing problem arises in numerous imaging problems, ranging from astronomy to biology to spectroscopy, where it is common to take (coarse) Fourier measurements of an object. Of particular interest is obtaining estimation procedures which are robust to noise, with the following desirable statistical and computational properties: we seek to use coarse Fourier measurements (bounded by some cutoff frequency); we hope to take a (quantifiably) small number of measurements; we desire our algorithm to run quickly. Suppose we have k point sources in d dimensions, where the points are separated by at least $\Delta$ from each other (in Euclidean distance). This work provides an algorithm with the following favorable guarantees: The algorithm uses Fourier measurements, whose frequencies are bounded by $O(1/\Delta)$ (up to log factors).
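The measurement model in this setting is simple to write down: each Fourier measurement evaluates f(s) = Σ_j w_j exp(-2πi⟨s, x_j⟩) at a frequency s whose norm stays below a cutoff on the order of 1/Δ. A toy NumPy sketch (source locations, weights, and the number of measurements are arbitrary illustrations, not the paper's algorithm):

```python
import numpy as np

# Toy sketch: bandlimited Fourier measurements of a superposition of point
# sources. The recovery algorithm itself is not reproduced here.
rng = np.random.default_rng(1)
d, k = 2, 3
sources = np.array([[0.1, 0.2], [0.5, 0.8], [0.9, 0.4]])  # k points in [0,1]^d
weights = np.array([1.0, 0.5, 2.0])

delta = min(np.linalg.norm(sources[i] - sources[j])
            for i in range(k) for j in range(i + 1, k))    # min separation
cutoff = 1.0 / delta                                       # O(1/Delta) cutoff

freqs = rng.uniform(-cutoff, cutoff, size=(64, d))         # coarse frequencies
meas = np.exp(-2j * np.pi * freqs @ sources.T) @ weights   # f(s) for each s

print(meas.shape)                                  # (64,)
print(np.abs(meas).max() <= weights.sum() + 1e-9)  # True: |f(s)| <= sum |w_j|
```

The cutoff bound is what makes the measurements "coarse": no frequency probes structure finer than the minimum separation scale.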
Motion ReTouch: Motion Modification Using Four-Channel Bilateral Control
Inami, Koki, Sakaino, Sho, Tsuji, Toshiaki
--Recent research has demonstrated the usefulness of imitation learning in autonomous robot operation. In particular, teaching using four-channel bilateral control, which can obtain position and force information, has proven effective. However, control performance sufficient to execute high-speed, complex tasks in a single attempt has not yet been achieved. We propose a method called Motion ReTouch, which retroactively modifies motion data obtained using four-channel bilateral control. The proposed method enables modification of not only position but also force information. This was achieved by combining multilateral control with a motion-copying system. The proposed method was verified in experiments with a real robot; the success rate of the test tube transfer task improved, demonstrating the possibility of modifying force information. I. INTRODUCTION In recent years, imitation learning [1] [2] [3], a learning-based approach that enables robots to imitate human behavior, has been attracting attention.
- Asia > Japan > Honshū > Kantō > Ibaraki Prefecture > Tsukuba (0.05)
- Asia > Japan > Honshū > Kantō > Saitama Prefecture > Saitama (0.05)
- North America > United States (0.04)
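Four-channel bilateral control is commonly described by two goals: the master and slave positions should coincide (x_m - x_s → 0) and the exchanged forces should satisfy the law of action and reaction (f_m + f_s → 0). A toy single-DoF sketch of such a coupling law, with hypothetical gains and double-integrator plants (real systems add disturbance observers and reaction-force estimation, and the paper's multilateral extension is not shown):

```python
# Toy 1-DoF sketch of the two goals of four-channel bilateral control:
# position tracking (x_m - x_s -> 0) and action-reaction (f_m + f_s -> 0).
def bilateral_accels(xm, vm, fm, xs, vs, fs, kp=100.0, kd=20.0, kf=1.0):
    pos_err, vel_err, force_err = xm - xs, vm - vs, fm + fs
    am = -kp * pos_err - kd * vel_err - kf * force_err   # master command
    a_s = +kp * pos_err + kd * vel_err - kf * force_err  # slave command
    return am, a_s

# Free motion: start with a 1.0 position mismatch and no external forces;
# the coupling drives the two positions together.
dt = 1e-3
xm, vm, xs, vs = 1.0, 0.0, 0.0, 0.0
for _ in range(5000):
    am, a_s = bilateral_accels(xm, vm, 0.0, xs, vs, 0.0)
    vm += am * dt; xm += vm * dt
    vs += a_s * dt; xs += vs * dt
print(abs(xm - xs) < 1e-3)  # True: positions synchronized
```

The position error obeys an overdamped second-order dynamics here (ë = -2kp·e - 2kd·ė), which is why it decays to zero; a motion-copying system would then record and replay the resulting position/force trajectories.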
Reviews: Temporal FiLM: Capturing Long-Range Sequence Dependencies with Feature-Wise Modulations.
Two or three relevant citations: Transformer models should probably be mentioned in the section on "models designed specifically for use on sequences", since they are competing heavily with the referenced baselines on NLP tasks especially. I believe your numbers on the Yelp dataset compare very favorably to the "sentiment neuron" work from Radford et al https://arxiv.org/abs/1704.01444 - that could be a nice addition and add further external context to your results. Some questions about the architecture, particularly the importance of the "additive skip connection" from input to output - how crucial is this connection, since it somewhat allows the network to bypass the TFiLM layers entirely? Does using a stacked skip (with free trainable parameters) still work, or does it hurt network training / break it completely? What is the SNR of the cubic interpolation used as input for the audio experiments?
ADV2E: Bridging the Gap Between Analogue Circuit and Discrete Frames in the Video-to-Events Simulator
Jiang, Xiao, Zhou, Fei, Lin, Jiongzhi
Event cameras operate fundamentally differently from traditional Active Pixel Sensor (APS) cameras, offering significant advantages. Recent research has developed simulators to convert video frames into events, addressing the shortage of real event datasets. Current simulators primarily focus on the logical behavior of event cameras. However, the fundamental analogue properties of pixel circuits are seldom considered in simulator design. The gap between analogue pixel circuits and discrete video frames degrades synthetic events, particularly in high-contrast scenes. In this paper, we propose a novel method of generating reliable event data based on a detailed analysis of the pixel circuitry in event cameras. We incorporate the analogue properties of event camera pixel circuits into the simulator design: (1) analogue filtering of signals from light intensity to events, and (2) a cutoff frequency that is independent of video frame rate. Experimental results on two relevant tasks, including semantic segmentation and image reconstruction, validate the reliability of simulated event data, even in high-contrast scenes. This demonstrates that deep neural networks exhibit strong generalization from simulated to real event data, confirming that the synthetic events generated by the proposed method are both realistic and well-suited for effective training.
- Europe > Switzerland (0.04)
- Asia > China > Yunnan Province > Kunming (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
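The two analogue ingredients named in the abstract can be sketched for a single pixel: a first-order low-pass on log intensity whose cutoff comes from circuit parameters (not the frame rate), followed by contrast-threshold crossings that emit events. The cutoff, threshold, and input signal below are hypothetical illustrations, not the paper's calibrated model:

```python
import numpy as np

# Toy event-pixel sketch: analogue low-pass filtering of log intensity,
# then ON/OFF events whenever the filtered signal moves by a contrast
# threshold theta relative to the level at the last event.
def simulate_pixel(log_I, dt, f_cutoff=100.0, theta=0.2):
    alpha = dt / (dt + 1.0 / (2 * np.pi * f_cutoff))  # RC filter coefficient
    v = log_I[0]        # filtered photoreceptor output
    ref = log_I[0]      # reference level at the last event
    events = []
    for t, x in enumerate(log_I):
        v += alpha * (x - v)                 # analogue low-pass step
        while v - ref >= theta:              # ON events
            ref += theta; events.append((t, +1))
        while ref - v >= theta:              # OFF events
            ref -= theta; events.append((t, -1))
    return events

t = np.arange(0, 1, 1e-3)                    # 1 kHz upsampled frames
log_I = np.where(t < 0.5, 0.0, 1.0)          # a step in log intensity
ev = simulate_pixel(log_I, dt=1e-3)
print(len(ev) >= 4 and ev[0][1] == 1)        # True: ON events after the step
```

Because `f_cutoff` enters only through the RC coefficient, halving the frame interval `dt` changes the discretization but not the filter's bandwidth, which is the frame-rate independence the abstract emphasizes.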
FITS: Modeling Time Series with $10k$ Parameters
Xu, Zhijian, Zeng, Ailing, Xu, Qiang
In this paper, we introduce FITS, a lightweight yet powerful model for time series analysis. Unlike existing models that directly process raw time-domain data, FITS operates on the principle that time series can be manipulated through interpolation in the complex frequency domain, achieving performance comparable to state-of-the-art models for time series forecasting and anomaly detection tasks. Notably, FITS accomplishes this with a svelte profile of just about 10k parameters, making it ideally suited for edge devices and paving the way for a wide range of applications. The code is available at https://github.com/VEWOXIC/FITS. Time series analysis plays a pivotal role in a myriad of sectors, from healthcare appliances to smart factories. Within these domains, the reliance is often on edge devices like smart sensors, driven by MCUs with limited computational and memory resources. Time series data, marked by its inherent complexity and dynamism, typically presents information that is both sparse and scattered within the time domain. To effectively harness this data, recent research has given rise to sophisticated models and methodologies (Zhou et al., 2021; Liu et al., 2022a; Zeng et al., 2023; Nie et al., 2023; Zhang et al., 2022). Yet, the computational and memory costs of these models make them unsuitable for resource-constrained edge devices. On the other hand, the frequency domain representation of time series data promises a more compact and efficient portrayal of inherent patterns. While existing research has indeed tapped into the frequency domain for time series analysis -- FEDformer (Zhou et al., 2022a) enriches its features using spectral data, and TimesNet (Wu et al., 2023) harnesses high-amplitude frequencies for feature extraction via CNNs -- a comprehensive utilization of the frequency domain's compactness remains largely unexplored.
Specifically, the ability of the frequency domain to employ complex numbers in capturing both amplitude and phase information is not utilized, resulting in the continued reliance on compute-intensive models for temporal feature extraction. In this study, we reinterpret time series analysis tasks, such as forecasting and reconstruction, as interpolation exercises within the complex frequency domain.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > China > Hong Kong (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
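The idea of extending a series by interpolating its complex spectrum can be demonstrated without any learned parameters: zero-padding the rFFT of a window performs band-limited interpolation onto a longer grid. FITS replaces this fixed padding with a trained complex-valued linear layer; the sketch below only illustrates the underlying frequency-domain mechanism, with an arbitrary test signal:

```python
import numpy as np

# Frequency-domain interpolation sketch: extend a series by manipulating its
# complex spectrum (amplitude and phase), rather than operating in time.
def freq_interpolate(x, out_len):
    spec = np.fft.rfft(x)                      # complex frequency representation
    padded = np.zeros(out_len // 2 + 1, dtype=complex)
    padded[:len(spec)] = spec                  # keep low-frequency content
    # irfft normalizes by out_len, so rescale to preserve amplitudes
    return np.fft.irfft(padded, n=out_len) * (out_len / len(x))

t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 4 * t)                  # 4 Hz tone, 128 samples
y = freq_interpolate(x, 256)                   # interpolate to 256 samples

t2 = np.linspace(0, 1, 256, endpoint=False)
err = np.abs(y - np.sin(2 * np.pi * 4 * t2)).max()
print(err < 1e-9)  # True: the tone is reconstructed on the finer grid
```

Because the whole transformation lives in a single (complex) linear map on a short spectrum, a learned version of it needs very few parameters, which is the source of FITS's roughly 10k-parameter footprint.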
Deep Learning for Time Series Classification of Parkinson's Disease Eye Tracking Data
Uribarri, Gonzalo, von Huth, Simon Ekman, Waldthaler, Josefine, Svenningsson, Per, Fransén, Erik
Eye-tracking is an accessible and non-invasive technology that provides information about a subject's motor and cognitive abilities. As such, it has proven to be a valuable resource in the study of neurodegenerative diseases such as Parkinson's disease. Saccade experiments, in particular, have proven useful in the diagnosis and staging of Parkinson's disease. However, to date, no single eye-movement biomarker has been found to conclusively differentiate patients from healthy controls. In the present work, we investigate the use of state-of-the-art deep learning algorithms to perform Parkinson's disease classification using eye-tracking data from saccade experiments. In contrast to previous work, instead of using hand-crafted features from the saccades, we use raw $\sim1.5\,s$ long fixation intervals recorded during the preparatory phase before each trial. Using these short time series as input we implement two different classification models, InceptionTime and ROCKET. We find that the models are able to learn the classification task and generalize to unseen subjects. InceptionTime achieves $78\%$ accuracy, while ROCKET achieves $88\%$ accuracy. We also employ a novel method for pruning the ROCKET model to improve interpretability and generalizability, achieving an accuracy of $96\%$. Our results suggest that fixation data has low inter-subject variability and potentially carries useful information about brain cognitive and motor conditions, making it suitable for use with machine learning in the discovery of disease-relevant biomarkers.
- Europe > Sweden > Stockholm > Stockholm (0.05)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
- Health & Medicine > Therapeutic Area > Neurology > Parkinson's Disease (1.00)
- Health & Medicine > Therapeutic Area > Musculoskeletal (1.00)
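ROCKET, one of the two classifiers used above, builds features by convolving the series with many random, dilated kernels and summarizing each response with simple pooling statistics. A minimal NumPy sketch of that feature step (kernel count, lengths, and the stand-in input are hypothetical; real ROCKET also randomizes padding and biases per kernel and feeds the features to a ridge classifier):

```python
import numpy as np

# Minimal ROCKET-style features: random dilated kernels, summarized by PPV
# (proportion of positive values) and the maximum of each convolution.
def rocket_features(x, n_kernels=100, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    feats = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        w = rng.normal(size=length); w -= w.mean()        # zero-mean weights
        b = rng.uniform(-1, 1)                            # random bias
        # random dilation, bounded so the dilated kernel fits the series
        d = int(2 ** rng.uniform(0, np.log2((len(x) - 1) / (length - 1))))
        kernel = np.zeros((length - 1) * d + 1)
        kernel[::d] = w                                   # insert dilation gaps
        conv = np.convolve(x, kernel[::-1], mode='valid') + b
        feats += [np.mean(conv > 0), conv.max()]          # PPV and max
    return np.array(feats)

x = np.sin(np.linspace(0, 20, 300))   # stand-in for a short fixation trace
f = rocket_features(x)
print(f.shape)  # (200,)
```

The pruning step mentioned in the abstract would then discard kernels whose features contribute little to the linear classifier, shrinking this feature vector while keeping accuracy.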
Towards Building More Robust Models with Frequency Bias
Bu, Qingwen, Huang, Dong, Cui, Heming
The vulnerability of deep neural networks to adversarial samples has been a major impediment to their broad applications, despite their success in various fields. Recently, some works suggested that adversarially-trained models emphasize the importance of low-frequency information to achieve higher robustness. While several attempts have been made to leverage this frequency characteristic, they have all faced the issue that applying low-pass filters directly to input images leads to irreversible loss of discriminative information and poor generalizability to datasets with distinct frequency features. This paper presents a plug-and-play module called the Frequency Preference Control Module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations, providing better utilization of frequency in robust learning. Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework, further improving model robustness across different architectures and datasets. Additionally, experiments were conducted to examine how the frequency bias of robust models impacts the adversarial training process and its final robustness, revealing interesting insights.
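The operation the module performs, separating and reweighting the low- and high-frequency components of an intermediate feature map, can be sketched with a fixed FFT-based decomposition. In the paper the recombination weights are learned adaptively; here `alpha`/`beta` and the mask radius are fixed hypothetical values for illustration:

```python
import numpy as np

# Sketch of low/high-frequency decomposition and reweighting of a 2-D
# feature map, in the spirit of a frequency preference module.
def split_frequencies(x, radius=0.25):
    F = np.fft.fftshift(np.fft.fft2(x))        # spectrum, DC at the center
    h, w = x.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = dist <= radius * min(h, w)          # centered low-pass mask
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    high = x - low                             # complementary high-pass
    return low, high

def reweight(x, alpha=1.0, beta=0.5):
    low, high = split_frequencies(x)
    return alpha * low + beta * high           # recombined feature map

x = np.random.default_rng(0).normal(size=(32, 32))
low, high = split_frequencies(x)
y = reweight(x)
print(np.allclose(low + high, x))  # True: exact decomposition
```

Unlike applying a low-pass filter to the input image, this decomposition is lossless by construction (`low + high == x`), so reweighting inside the network can de-emphasize high frequencies without irreversibly discarding them.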