Laser pulse


Shaping Laser Pulses with Reinforcement Learning

Capuano, Francesco, Peceli, Davorin, Tiboni, Gabriele

arXiv.org Artificial Intelligence

High Power Laser (HPL) systems operate in the femtosecond regime--one of the shortest timescales achievable in experimental physics. HPL systems are instrumental in high-energy physics, leveraging ultra-short pulse durations to yield extremely high intensities, which are essential for both practical applications and theoretical advancements in light-matter interactions. Traditionally, the parameters regulating HPL optical performance are tuned manually by human experts, or optimized using black-box methods that can be computationally demanding. Critically, black-box methods rely on stationarity assumptions, overlooking the complex dynamics of high-energy physics and day-to-day changes in real-world experimental settings, and thus often need to be restarted. Deep Reinforcement Learning (DRL) offers a promising alternative by enabling sequential decision-making in non-static settings. This work investigates the safe application of DRL to HPL systems, and extends the current research by (1) learning a control policy directly from images and (2) addressing the need for generalization across diverse dynamics. We evaluate our method across various configurations and observe that DRL effectively enables cross-domain adaptability, coping with fluctuations in the dynamics while achieving 90% of the target intensity in test environments.
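The non-stationarity argument above can be made concrete with a toy control loop: the optimal actuator setting drifts between episodes, and a controller that keeps adapting online still recovers most of the target intensity each time. The environment, names, and numbers here are all illustrative (a crude finite-difference hill climber stands in for a learned DRL policy), not the paper's setup:

```python
import math
import random

class PulseEnv:
    """Toy stand-in for an HPL control loop: one scalar actuator, and an
    optimal setting that drifts between episodes (non-stationarity)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        # The best actuator setting drifts day to day.
        self.optimum = self.rng.uniform(-1.0, 1.0)

    def step(self, action):
        # Fraction of the (unit) target intensity achieved by this action.
        return math.exp(-4.0 * (action - self.optimum) ** 2)

env = PulseEnv()
finals = []
for episode in range(5):
    env.reset()
    a, lr, eps = 0.0, 0.2, 1e-3
    for _ in range(200):
        # Finite-difference ascent: a crude stand-in for a learned policy.
        grad = (env.step(a + eps) - env.step(a - eps)) / (2 * eps)
        a += lr * grad
    finals.append(env.step(a))   # fraction of target intensity reached
```

A static parameter configuration tuned for one episode would fail on the next; re-adapting from the current state is what makes the sequential formulation attractive.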


Research on Edge Detection of LiDAR Images Based on Artificial Intelligence Technology

Yang, Haowei, Wang, Liyang, Zhang, Jingyu, Cheng, Yu, Xiang, Ao

arXiv.org Artificial Intelligence

LiDAR works by emitting laser pulses and measuring their reflection times to accurately obtain three-dimensional spatial information, thus generating high-resolution point cloud data and images. However, the application of LiDAR images faces numerous challenges, particularly in edge detection, where traditional methods often fail to meet practical needs due to insufficient detection accuracy and high computational complexity. Edge detection, as a crucial step in image processing, directly impacts subsequent tasks such as image segmentation, object recognition, and scene understanding [1]. Accurate edge detection can improve target recognition accuracy, optimize navigation path planning, and enhance the reliability of environmental perception. Therefore, studying an efficient and accurate LiDAR image edge detection method has significant theoretical value and application prospects. Existing edge detection methods, such as the Canny and Sobel algorithms, perform well on conventional images but often struggle with the unique noise characteristics and data structure of LiDAR images. With the rapid advancement of artificial intelligence technology, deep learning has achieved remarkable results in image processing. However, applying deep learning to LiDAR image edge detection still faces challenges such as complex data preprocessing, difficult model training, and significant computational resource demands. Hence, there is an urgent need for an innovative AI-based edge detection method to address these challenges. This study aims to explore and develop an AI-based edge detection method for LiDAR images. The main research contents include: 1. Reviewing the current state of LiDAR technology and its application in edge detection.
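As a point of reference for the classical baselines mentioned above, the Sobel operator reduces to two 3x3 convolutions whose responses are combined into a gradient magnitude; a minimal dependency-free sketch (the toy image is illustrative):

```python
# Sobel kernels for horizontal (KX) and vertical (KY) gradients.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image given as a list of lists.
    Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A 6x6 image with a vertical step edge between columns 2 and 3:
# the magnitude peaks on the two columns adjacent to the step.
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
mag = sobel_magnitude(img)
```

Thresholding the magnitude yields an edge map; it is exactly this hand-crafted, noise-sensitive pipeline that learned detectors aim to improve on for LiDAR data.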


A Fingertip Sensor and Algorithms for Pre-touch Distance Ranging and Material Detection in Robotic Grasping

Fang, Cheng, Wang, Di, Guo, Fengzhi, Zou, Jun, Song, Dezhen

arXiv.org Artificial Intelligence

To enhance robotic grasping capabilities, we are developing new contactless fingertip sensors to measure distance in close proximity and simultaneously detect the type of material and the interior structure. These sensors are referred to as pre-touch dual-modal and dual-mechanism (PDM$^2$) sensors, and they operate using both pulse-echo ultrasound (US) and optoacoustic (OA) modalities. We present the design of a PDM$^2$ sensor that utilizes a pulsed laser beam and a customized ultrasound transceiver with a wide acoustic bandwidth for ranging and sensing. Both US and OA signals are collected simultaneously, triggered by the same laser pulse. To validate our design, we have fabricated a prototype of the PDM$^2$ sensor and integrated it into an object scanning system. We have also developed algorithms to enable the sensor, including time-of-flight (ToF) auto estimation, ranging rectification, sensor and system calibration, distance ranging, material/structure detection, and object contour detection and reconstruction. The experimental results demonstrate that the new PDM$^2$ sensor and its algorithms effectively enable the object scanning system to achieve satisfactory ranging and contour reconstruction performance, along with satisfactory material/structure detection capabilities. In conclusion, the PDM$^2$ sensor offers a practical and powerful solution for improving robotic grasping of unknown objects by providing advanced perception capabilities.
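The pulse-echo ranging step described above reduces to halving the round-trip path; a minimal sketch (the airborne-ultrasound speed of 343 m/s is an assumed value, not one given in the abstract):

```python
def tof_distance(round_trip_time_s, speed_m_s):
    """Pulse-echo range: the pulse travels out and back, so halve the path."""
    return speed_m_s * round_trip_time_s / 2.0

SPEED_OF_SOUND_AIR = 343.0   # m/s at ~20 degrees C (assumed medium)

# An echo arriving 1 ms after the laser trigger implies ~17 cm standoff.
d = tof_distance(1e-3, SPEED_OF_SOUND_AIR)   # 0.1715 m
```

The same geometry applies to the optoacoustic return, with the relevant sound speed of the target material substituted for that of air.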


TempoRL: laser pulse temporal shape optimization with Deep Reinforcement Learning

Capuano, Francesco, Peceli, Davorin, Tiboni, Gabriele, Camoriano, Raffaello, Rus, Bedřich

arXiv.org Artificial Intelligence

Optimal High Power Laser (HPL) performance is essential for the success of a wide variety of experimental tasks related to light-matter interactions. Traditionally, HPL parameters are optimised in an automated fashion relying on black-box numerical methods. However, these can be demanding in terms of computational resources and usually disregard transient and complex dynamics. Model-free Deep Reinforcement Learning (DRL) offers a promising alternative framework for optimising HPL performance, since it allows tuning the control parameters as a function of system states subject to nonlinear temporal dynamics, without requiring an explicit model of those dynamics. Furthermore, DRL aims to find an optimal control policy rather than a static parameter configuration, making it particularly suitable for dynamic processes involving sequential decision-making. This is especially relevant as laser systems are typically characterised by dynamic rather than static traits, calling for a strategy that chooses the applied control based on the current context instead of a single optimal control configuration. This paper investigates the potential of DRL for improving the efficiency and safety of HPL control systems. We apply this technique to optimise the temporal profile of laser pulses in the L1 pump laser hosted at the ELI Beamlines facility. We show how to adapt DRL to the setting of spectral phase control by solely tuning the dispersion coefficients of the spectral phase, reaching near-transform-limited pulses with a full-width at half-maximum (FWHM) of ca. 1.6 ps.
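The FWHM figure quoted above can be extracted numerically from any sampled temporal intensity profile by locating the half-maximum crossings; a minimal sketch using a Gaussian pulse whose analytic width, 2*sqrt(2 ln 2)*sigma, is set near the reported 1.6 ps (the sigma and sampling grid are illustrative choices, not values from the paper):

```python
import math

def fwhm(ts, intensity):
    """Full width at half maximum via linear interpolation of the
    half-maximum crossings of a sampled profile."""
    half = max(intensity) / 2.0
    crossings = []
    for i in range(len(ts) - 1):
        a, b = intensity[i], intensity[i + 1]
        if (a - half) * (b - half) < 0:      # sign change -> crossing
            frac = (half - a) / (b - a)      # linear interpolation
            crossings.append(ts[i] + frac * (ts[i + 1] - ts[i]))
    return crossings[-1] - crossings[0]

# Gaussian temporal intensity profile (times in ps) with sigma = 0.68 ps,
# whose analytic FWHM is 2*sqrt(2 ln 2)*sigma ~= 1.6 ps.
sigma = 0.68
ts = [i * 0.001 - 5.0 for i in range(10001)]
intensity = [math.exp(-t * t / (2 * sigma * sigma)) for t in ts]
width = fwhm(ts, intensity)
```

The same crossing-based estimator applies to measured, non-Gaussian profiles, which is why FWHM is a convenient scalar target for an optimisation loop.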


Automated control and optimisation of laser driven ion acceleration

Loughran, B., Streeter, M. J. V., Ahmed, H., Astbury, S., Balcazar, M., Borghesi, M., Bourgeois, N., Curry, C. B., Dann, S. J. D., DiIorio, S., Dover, N. P., Dzelzanis, T., Ettlinger, O. C., Gauthier, M., Giuffrida, L., Glenn, G. D., Glenzer, S. H., Green, J. S., Gray, R. J., Hicks, G. S., Hyland, C., Istokskaia, V., King, M., Margarone, D., McCusker, O., McKenna, P., Najmudin, Z., Parisuaña, C., Parsons, P., Spindloe, C., Symes, D. R., Thomas, A. G. R., Treffert, F., Xu, N., Palmer, C. A. J.

arXiv.org Artificial Intelligence

The interaction of relativistically intense lasers with opaque targets represents a highly non-linear, multi-dimensional parameter space. This limits the utility of sequential 1D scanning of experimental parameters for the optimisation of secondary radiation, although to date this has been the accepted methodology due to low data acquisition rates. High repetition-rate (HRR) lasers augmented by machine learning present a valuable opportunity for efficient source optimisation. Here, an automated, HRR-compatible system produced high fidelity parameter scans, revealing the influence of laser intensity on target pre-heating and proton generation. A closed-loop Bayesian optimisation of maximum proton energy, through control of the laser wavefront and target position, produced proton beams with equivalent maximum energy to manually-optimized laser pulses but using only 60% of the laser energy. This demonstration of automated optimisation of laser-driven proton beams is a crucial step towards deeper physical insight and the construction of future radiation sources.
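The closed-loop idea, modelling the objective with a Gaussian process, acquiring the next shot via an upper confidence bound, then measuring, can be sketched in a few lines. This is a generic GP-UCB loop in one normalised control variable with a hypothetical stand-in for the proton-energy readout, not the authors' implementation:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel on 1D inputs (unit signal variance)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """GP posterior mean and variance at query points Xq given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Kq = rbf(X, Xq)
    alpha = np.linalg.solve(K, y)
    mu = Kq.T @ alpha
    v = np.linalg.solve(K, Kq)
    var = np.clip(1.0 - np.sum(Kq * v, axis=0), 0.0, None)
    return mu, var

def measured_energy(x):
    """Hypothetical stand-in for the proton-energy readout."""
    return np.exp(-(x - 0.65) ** 2 / 0.02)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)          # a few initial probe shots
y = measured_energy(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(10):                   # closed loop: model -> acquire -> shoot
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * np.sqrt(var)     # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]
    X = np.append(X, x_next)
    y = np.append(y, measured_energy(x_next))
best_x = X[np.argmax(y)]
```

The appeal over a sequential 1D scan is sample efficiency: the surrogate model directs each expensive shot towards regions that are either promising or poorly explored.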


Next-Gen Volvo To Have LiDAR And AI-Based Computer Fitted

#artificialintelligence

Starting as a small local enterprise in 1927, Volvo has grown into a major player in the commercial transport and infrastructure solutions market. In May last year, Volvo announced that it had chosen Luminar to supply lidar sensors for its next-generation XC90. The SUV will come with state-of-the-art sensors, including LiDAR technology and an autonomous driving computer powered by the NVIDIA DRIVE Orin system-on-a-chip. The suite of advanced safety features will be standard on the successor to Volvo Cars' XC90, to be unveiled in 2022. The next generation of pure electric Volvo cars will have industry-leading safety technology, including LiDAR and an AI-driven supercomputer, as standard to help save lives.


Watch a beam of light bounce off mirrors in ultra-slow motion

New Scientist - News

An ultra-fast camera has captured a video of light as it bounces between mirrors. Although light isn't normally visible in flight, some photons from a laser pulse will scatter off particles in the air and can be picked up by a camera. Using these photons to recreate the pulse's trajectory is difficult, because by the time they reach the camera, the pulse has moved to a new location. Edoardo Charbon at the Swiss Federal Institute of Technology in Lausanne and his colleagues used a camera with a shutter speed of about a trillionth of a second to take pictures and video of a laser beam following a 3D path. Knowing exactly how long the pulse took to get to the camera, along with the pulse's trajectory in a flat plane, allowed a machine learning algorithm to reconstruct the entire 3D path of the burst of light.
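The quoted shutter speed explains why such a camera can freeze light in flight: in a trillionth of a second a pulse advances only a fraction of a millimetre, as a one-line calculation shows:

```python
C = 299_792_458.0            # speed of light in vacuum, m/s

shutter_s = 1e-12            # ~one trillionth of a second, as reported
distance_mm = C * shutter_s * 1e3   # pulse advance per exposure, ~0.3 mm
```

At that exposure time, successive frames sample the pulse at sub-millimetre intervals along its path, which is what makes the 3D reconstruction feasible.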


Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

Cao, Yulong, Xiao, Chaowei, Cyr, Benjamin, Zhou, Yimeng, Park, Won, Rampazzi, Sara, Chen, Qi Alfred, Fu, Kevin, Mao, Z. Morley

arXiv.org Machine Learning

In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization, and design an algorithm that combines optimization and global sampling, which improves the attack success rate to around 75%. As a case study to understand the attack impact at the AV driving-decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
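The combination of optimization and global sampling described above can be illustrated generically: on a multimodal loss (a hypothetical stand-in, not the paper's perturbation objective), restarting a local optimizer from random points escapes the poor local optima that can trap a single run:

```python
import math
import random

def attack_loss(x):
    """Hypothetical multimodal stand-in for a spoofing objective:
    many local minima, with the best basin near x ~= 0.69."""
    return math.sin(25 * x) * 0.3 + (x - 0.8) ** 2

def local_descent(f, x0, lr=0.01, eps=1e-4, steps=200, lo=0.0, hi=1.0):
    """Plain gradient descent via finite differences (the local optimizer)."""
    x = x0
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x = min(hi, max(lo, x - lr * g))
    return x

rng = random.Random(0)

# Pure local optimization from one start: can stall in a local minimum.
x_local = local_descent(attack_loss, rng.uniform(0.0, 1.0))

# Optimization + global sampling: restart the local optimizer from many
# random points and keep the best result found.
starts = [rng.uniform(0.0, 1.0) for _ in range(20)]
x_best = min((local_descent(attack_loss, s) for s in starts),
             key=attack_loss)
```

The restart loop trades extra function evaluations for robustness to multimodality, the same trade-off the paper's hybrid algorithm exploits to raise the attack success rate.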


Big data, small lab – Physics World

#artificialintelligence

The Large Hadron Collider at CERN is one of the world's largest scientific instruments. It captures 5 trillion bits of data every second, and the Geneva-based lab employs a dedicated group of experts to manage the flow. In contrast, the instrument shown here – known as a time-stretch quantitative phase imaging microscope – fits on a bench top, and is managed by a team of one. However, it is also capable of capturing an immense amount of data: 0.8 trillion bits per second. These two examples illustrate just how ubiquitous "big data" has become in physics.