
Soft Arm-Motor Thrust Characterization for a Pneumatically Actuated Soft Morphing Quadrotor

Sumathy, Vidya, Haluska, Jakub, Nikolakopoulos, George

arXiv.org Artificial Intelligence

In this work, an experimental characterization of the configuration space of a soft, pneumatically actuated morphing quadrotor is presented, with a focus on precise thrust characterization of its flexible arms, considering the effect of downwash. Unlike traditional quadrotors, the soft drone has pneumatically actuated arms, introducing complex, nonlinear interactions between motor thrust and arm deformation, which make precise control challenging. The silicone arms are actuated using differential pressure to achieve flexibility and thus have a variable workspace compared to their fixed counterparts. The deflection of the soft arms during compression and expansion is controlled throughout the flight. However, in real time, the downwash from the motor attached at the tip of the soft arm generates a significant and random disturbance on the arm. This disturbance affects both the desired deflection of the arm and the overall stability of the system. To address this factor, an experimental characterization of the effect of downwash on the deflection angle of the arm is conducted.


ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction

Tang, Ziyu, Ye, Weicai, Wang, Yifan, Huang, Di, Bao, Hujun, He, Tong, Zhang, Guofeng

arXiv.org Artificial Intelligence

Neural implicit reconstruction via volume rendering has demonstrated its effectiveness in recovering dense 3D surfaces. However, it is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics. To address this issue, previous methods typically employ geometric priors, which are often constrained by the performance of the prior models. In this paper, we propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal. Unlike previous methods that uniformly apply geometric priors on all samples, introducing significant bias in accuracy, our proposed normal deflection field dynamically learns and adapts the utilization of samples based on their specific characteristics, thereby improving both the accuracy and effectiveness of the model. Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures. In addition, we introduce a novel ray sampling strategy based on the deflection angle to facilitate the unbiased rendering process, which significantly improves the quality and accuracy of intricate surfaces, especially on thin structures. Consistent improvements on various challenging datasets demonstrate the superiority of our method.
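The core quantity in the abstract — the angular deviation between a predicted scene normal and a monocular prior normal — can be sketched as follows. This is a minimal illustration, not the paper's model: `angular_deviation` is standard geometry, while `prior_weight` (and its scale `tau`) is an assumed example of down-weighting the prior where the deflection is large.

```python
import numpy as np

def angular_deviation(scene_normals, prior_normals):
    """Per-sample angle (radians) between predicted scene normals and
    prior normals; both arrays are (N, 3) and assumed unit-length."""
    cos = np.clip(np.sum(scene_normals * prior_normals, axis=-1), -1.0, 1.0)
    return np.arccos(cos)

def prior_weight(theta, tau=np.deg2rad(20.0)):
    """Illustrative weighting: trust the normal prior less where the
    deflection angle theta is large (tau is an assumed scale, not the
    paper's learned mechanism)."""
    return np.exp(-(theta / tau) ** 2)
```

For identical normals the deviation is zero and the prior gets full weight; for orthogonal normals the deviation is 90° and the prior is effectively ignored.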


Large Language Models for Human-Machine Collaborative Particle Accelerator Tuning through Natural Language

Kaiser, Jan, Eichler, Annika, Lauscher, Anne

arXiv.org Artificial Intelligence

Autonomous tuning of particle accelerators is an active and challenging field of research with the goal of enabling novel accelerator technologies for cutting-edge high-impact applications, such as physics discovery, cancer research and material sciences. A key challenge with autonomous accelerator tuning remains that the most capable algorithms require an expert in optimisation, machine learning or a similar field to implement the algorithm for every new tuning task. In this work, we propose the use of large language models (LLMs) to tune particle accelerators. We demonstrate on a proof-of-principle example the ability of LLMs to successfully and autonomously tune a particle accelerator subsystem based on nothing more than a natural language prompt from the operator, and compare the performance of our LLM-based solution to state-of-the-art optimisation algorithms, such as Bayesian optimisation (BO) and reinforcement learning-trained optimisation (RLO). In doing so, we also show how LLMs can perform numerical optimisation of a highly non-linear real-world objective function. Ultimately, this work represents yet another complex task that LLMs are capable of solving and promises to help accelerate the deployment of autonomous tuning algorithms to the day-to-day operations of particle accelerators.
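The LLM-in-the-loop tuning described above can be sketched as a generic observe-propose-measure loop. This is an assumed skeleton, not the paper's implementation: `propose` stands in for the LLM (it receives the measurement history as text and returns new actuator settings), and `measure` stands in for the accelerator subsystem's objective.

```python
import json

def tune(propose, measure, settings, n_steps=10):
    """Generic prompt-driven tuning loop. `propose` maps the observation
    history (serialized as JSON text, as one might place it in a prompt)
    to new actuator settings; in the paper's setup this role is played by
    an LLM, here it is any callable (assumption). `measure` evaluates the
    current settings and returns a scalar loss to minimise."""
    history = []
    best = (float("inf"), dict(settings))
    for _ in range(n_steps):
        loss = measure(settings)
        history.append({"settings": dict(settings), "loss": loss})
        if loss < best[0]:
            best = (loss, dict(settings))
        # The proposer only ever sees text, mirroring the natural-language
        # interface: no gradients, no task-specific optimiser code.
        settings = propose(json.dumps(history))
    return best
```

Swapping `propose` for a BO or RLO step function would reproduce the comparison baselines in the same harness, which is one reason the text-only interface is attractive.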


SmartFPS: Neural Network based Wireless-inertial fusion positioning system

Hua, Luchi, Yang, Jun

arXiv.org Artificial Intelligence

The current fusion positioning systems are mainly based on filtering algorithms, such as Kalman filtering or particle filtering. However, the system complexity of practical application scenarios is often very high, for example noise modeling in pedestrian inertial navigation systems, or environmental noise modeling in fingerprint matching and localization algorithms. To solve this problem, this paper proposes a fusion positioning system based on deep learning, together with a transfer learning strategy for improving the performance of neural network models on samples with different distributions. The results show that in the whole-floor scenario, the average positioning accuracy of the fusion network is 0.506 m. The transfer learning experiments show that the estimation accuracy of the inertial navigation step length and rotation angle for different pedestrians can be improved by 53.3% on average, the Bluetooth positioning accuracy of different devices can be improved by 33.4%, and the fused accuracy by 31.6%. The current mature GPS positioning is usually unable to locate effectively indoors due to irregular attenuation caused by the occlusion of GPS signals by clouds, building walls, and ceilings. Because of this, indoor positioning systems were proposed. Each positioning technique has its advantages as well as its limitations. For example, inertial navigation positioning is prone to accumulative errors due to system noise and drift [19, 20]; positioning signals such as Wi-Fi and Bluetooth fluctuate and their attenuation is difficult to model, so traditional methods such as trilateral positioning [21, 22] achieve only limited positioning accuracy. Luchi Hua and Jun Yang (corresponding author) are with Southeast University, 2 Sipailou, Nanjing 210096, China.
In general, high-precision and highly stable positioning performance cannot be obtained from a single positioning system. On the other hand, high-cost positioning systems cannot be used for civilian applications. Especially in pedestrian positioning scenarios, factors such as portability and cost need to be considered. Therefore, most positioning solutions obtain multiple sensor readings from the user's mobile terminal to coordinate positioning services, such as the gyroscope and accelerometer of the smartphone's inertial unit, and the Wi-Fi and Bluetooth modules. Due to the limitations of individual positioning technologies, fusion positioning systems have been widely studied in recent years, combining for example Wi-Fi, Bluetooth, Lidar, and inertial navigation [27-32]. Simply put, data fusion is the process of combining data from multiple sensors and related information to achieve more specific inferences than can be achieved with a single sensor.
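The two ingredients being fused — a drifting inertial dead-reckoning track and a noisy wireless fix — can be illustrated with a toy model. This is a minimal sketch for intuition only: the paper uses a neural network, whereas the constant blend weight `w_ble` here is an assumption.

```python
import math

def pdr_step(x, y, step_length, heading_rad):
    """One pedestrian dead-reckoning update: advance by the estimated step
    length along the estimated heading. Errors in either quantity
    accumulate over successive steps, which is the drift [19, 20]."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

def fuse(pdr_xy, ble_xy, w_ble=0.3):
    """Toy complementary fusion: blend the drifting PDR estimate with a
    noisy absolute Bluetooth fix. A learned fusion network would replace
    this fixed weighting (the 0.3 is an illustrative assumption)."""
    return tuple((1.0 - w_ble) * p + w_ble * b for p, b in zip(pdr_xy, ble_xy))
```

The PDR term supplies smooth relative motion between wireless fixes, while the Bluetooth term bounds the accumulated drift; the interesting part, and the paper's contribution, is learning how much to trust each source per sample.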


Metasurface-enhanced Light Detection and Ranging Technology

Martins, Renato Juliano, Marinov, Emil, Youssef, M. Aziz Ben, Kyrou, Christina, Joubert, Mathilde, Colmagro, Constance, Gâté, Valentin, Turbil, Colette, Coulon, Pierre-Marie, Turover, Daniel, Khadir, Samira, Giudici, Massimo, Klitis, Charalambos, Sorel, Marc, Genevet, Patrice

arXiv.org Artificial Intelligence

Deploying advanced imaging solutions to robotic and autonomous systems by mimicking human vision requires simultaneous acquisition of multiple fields of view, namely the peripheral and fovea regions. The low-resolution peripheral field provides coarse scene exploration to direct the eye to focus on a highly resolved fovea region for sharp imaging. Among 3D computer vision techniques, Light Detection and Ranging (LiDAR) is currently considered at the industrial level for robotic vision. LiDAR is an imaging technique that monitors pulses of light at optical frequencies to sense the space and to recover three-dimensional ranging information. Notwithstanding the efforts on LiDAR integration and optimization, commercially available devices have slow frame rates and low image resolution, notably limited by the performance of mechanical or slow solid-state deflection systems. Metasurfaces (MS) are versatile optical components that can distribute the optical power in desired regions of space. Here, we report on an advanced LiDAR technology that uses ultrafast low-FoV deflectors cascaded with large-area metasurfaces to achieve a large FoV and simultaneous peripheral and central imaging zones. This technology achieves MHz frame rates for 2D imaging, and up to kHz for 3D imaging, with an extremely large FoV (up to 150° on both vertical and horizontal scanning axes). The use of this disruptive LiDAR technology with advanced learning algorithms offers perspectives to improve further the perception capabilities and decision-making process of autonomous vehicles and robotic systems.
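The ranging principle mentioned in the abstract — recovering distance from a monitored light pulse — reduces to round-trip time of flight. A minimal sketch of that textbook relation (not anything specific to the metasurface system):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(delta_t_s):
    """Convert a measured round-trip time of flight (seconds) to a one-way
    range (meters): the pulse travels to the target and back, hence the
    factor of 2."""
    return C * delta_t_s / 2.0
```

A target 15 m away returns the pulse after about 100 ns, which is why MHz-scale per-point acquisition is physically plausible and the frame rate is instead dominated by the beam-deflection hardware the paper addresses.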