Temporal and Rotational Calibration for Event-Centric Multi-Sensor Systems
Mai, Jiayao, Lu, Xiuyuan, Dai, Kuan, Shen, Shaojie, Zhou, Yi
Event cameras generate asynchronous signals in response to pixel-level brightness changes, offering a sensing paradigm with theoretically microsecond-scale latency that can significantly enhance the performance of multi-sensor systems. Extrinsic calibration is a critical prerequisite for effective sensor fusion; however, configurations involving event cameras remain understudied. In this paper, we propose a motion-based temporal and rotational calibration framework tailored for event-centric multi-sensor systems, eliminating the need for dedicated calibration targets. Our method takes as input rotational motion estimates obtained independently from the event camera and from the other heterogeneous sensors. Unlike conventional approaches that rely on event-to-frame conversion, our method efficiently estimates angular velocity from normal-flow observations derived from the spatio-temporal profile of the event data. The overall calibration pipeline adopts a two-step approach: it first initializes the temporal offset and rotational extrinsics by exploiting kinematic correlations in the spirit of Canonical Correlation Analysis (CCA), and then refines both temporal and rotational parameters through a joint non-linear optimization using a continuous-time parametrization in SO(3). Extensive evaluations on both publicly available and self-collected datasets show that the proposed method achieves calibration accuracy comparable to target-based methods while exhibiting superior stability over purely CCA-based methods, highlighting its precision, robustness, and flexibility. To facilitate future research, our implementation will be made open-source. Code: https://github.com/NAIL-HNU/EvMultiCalib.
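As a rough illustration of the initialization step, the sketch below recovers a coarse time offset by cross-correlating rotation-invariant angular-velocity magnitudes, then solves for the extrinsic rotation with an SVD-based (Wahba/Kabsch) alignment of the time-aligned angular-velocity vectors. The plain cross-correlation, the uniform-resampling assumption, and all function names are illustrative stand-ins, not the paper's CCA formulation or implementation.

```python
import numpy as np

def estimate_time_offset(w_a, w_b, dt):
    """Coarse time offset between two angular-velocity sequences (N, 3)
    resampled to a common uniform rate 1/dt. Magnitudes are invariant to
    the unknown extrinsic rotation, so no alignment is needed yet.
    A positive result means sensor a's signal lags sensor b's."""
    ma = np.linalg.norm(w_a, axis=1); ma -= ma.mean()
    mb = np.linalg.norm(w_b, axis=1); mb -= mb.mean()
    corr = np.correlate(ma, mb, mode="full")
    return (np.argmax(corr) - (len(mb) - 1)) * dt

def estimate_rotation(w_a, w_b):
    """Rotation R such that w_a ≈ R @ w_b, from time-aligned samples,
    via the SVD solution to Wahba's problem (Kabsch algorithm)."""
    H = w_b.T @ w_a                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

In a pipeline of this kind, the offset would be applied to one sequence before the rotation solve, and both estimates would then seed the joint continuous-time refinement.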
EF-Calib: Spatiotemporal Calibration of Event- and Frame-Based Cameras Using Continuous-Time Trajectories
Wang, Shaoan, Xin, Zhanhua, Hu, Yaoqing, Li, Dongyue, Zhu, Mingzhu, Yu, Junzhi
Event cameras are bio-inspired, asynchronously triggered cameras whose low latency and high dynamic range make them promising candidates for fusion with frame-based cameras. However, calibrating stereo vision systems that incorporate both event and frame-based cameras remains a significant challenge. In this letter, we present EF-Calib, a spatiotemporal calibration framework for event- and frame-based cameras using continuous-time trajectories. A novel calibration pattern applicable to both camera types and a corresponding event recognition algorithm are proposed. Leveraging the asynchronous nature of events, we introduce a differentiable piecewise B-spline that represents the camera pose continuously, enabling joint calibration of intrinsic parameters, extrinsic parameters, and time offset, with analytical Jacobians provided. Various experiments evaluate the calibration performance of EF-Calib, covering intrinsic parameters, extrinsic parameters, and time offset. Results show that EF-Calib achieves the most accurate intrinsic parameters among current state-of-the-art methods, extrinsic accuracy close to that of frame-based calibration, and accurate time-offset estimation. EF-Calib provides a convenient and accurate toolbox for calibrating systems that fuse events and frames. The code of this paper will also be open-sourced at: https://github.com/wsakobe/EF-Calib.
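The continuous-time ingredient can be sketched as a cumulative cubic B-spline on SO(3), the standard construction behind spline-based calibration and trajectory estimation. The snippet below is a minimal illustration under a uniform-knot assumption, not EF-Calib's actual implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Cumulative basis matrix for a uniform cubic B-spline.
C = np.array([[6, 0,  0,  0],
              [5, 3, -3,  1],
              [1, 3,  3, -2],
              [0, 0,  0,  1]]) / 6.0

def spline_rotation(ctrl, u):
    """Interpolate the orientation at normalized time u in [0, 1) within a
    spline segment, given four consecutive control rotations ctrl[0..3]
    (scipy Rotation objects). Uses the cumulative formulation:
    R(u) = R0 * prod_j Exp(b_j(u) * Log(R_{j-1}^T R_j))."""
    b = C @ np.array([1.0, u, u**2, u**3])   # cumulative basis values
    out = ctrl[0]
    for j in range(1, 4):
        delta = (ctrl[j - 1].inv() * ctrl[j]).as_rotvec()  # Log map
        out = out * R.from_rotvec(b[j] * delta)            # Exp map
    return out
```

Because the pose is defined (and differentiable) at any timestamp, each asynchronous event can contribute a residual at its exact time of occurrence, which is what makes this representation attractive for event-camera calibration.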
A multi-modal table tennis robot system
Ziegler, Andreas, Gossard, Thomas, Vetter, Karl, Tebbe, Jonas, Zell, Andreas
Table tennis is a fast-paced and exhilarating sport that demands agility, precision, and lightning-fast reflexes. It is enjoyed by millions of enthusiasts worldwide, ranging from casual players to professional athletes. In recent years, the fusion of technology and sports has led to various training aids and innovations aimed at enhancing players' skills and fostering their competitive edge. Among these technological advancements, table tennis robots have also emerged. While not yet able to compete with professional players, table tennis robots are an interesting research environment for pushing perception and control algorithms to their limits. Thus, it is not surprising that more and more research groups use table tennis robots as a test bed for their algorithms [1][2][3][4].
Simultaneous Synchronization and Calibration for Wide-baseline Stereo Event Cameras
Xing, Wanli, Lin, Shijie, Zheng, Guangze, Du, Yanjun, Pan, Jia
Event-based cameras are increasingly utilized in various applications owing to their high temporal resolution and low power consumption. However, a fundamental challenge arises when deploying multiple such cameras: they operate on independent time systems, leading to temporal misalignment that can significantly degrade performance in downstream applications. Traditional solutions, which often rely on hardware-based synchronization, face compatibility limitations and are impractical for long-distance setups. To address these challenges, we propose a novel algorithm that exploits the motion of objects in the shared field of view to achieve millisecond-level synchronization among multiple event-based cameras, while concurrently estimating their extrinsic parameters. We validate our approach in both simulated and real-world indoor/outdoor scenarios, demonstrating successful synchronization and accurate extrinsic parameter estimation.
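The core idea admits a simple sketch: before extrinsics are known, the image-plane speed of a commonly observed object is a reasonably viewpoint-robust signal, so scanning candidate offsets for maximal correlation between the two speed profiles yields a coarse synchronization. This is a deliberate simplification of the paper's joint synchronization-and-extrinsics estimation; the function name, sampling assumption, and scoring are illustrative.

```python
import numpy as np

def sync_by_object_speed(traj_a, traj_b, dt, max_shift):
    """traj_a, traj_b: (N, 2) image positions of the same tracked object,
    sampled at a common rate 1/dt. Returns the time offset (seconds) that
    maximizes the correlation of the two image-plane speed profiles."""
    sa = np.linalg.norm(np.diff(traj_a, axis=0), axis=1)
    sb = np.linalg.norm(np.diff(traj_b, axis=0), axis=1)
    sa -= sa.mean(); sb -= sb.mean()
    best, best_score = 0, -np.inf
    for k in range(-max_shift, max_shift + 1):
        # Overlap sa[i + k] with sb[i] for the candidate shift k.
        a, b = (sa[k:], sb[:len(sb) - k]) if k >= 0 else (sa[:k], sb[-k:])
        n = min(len(a), len(b))
        score = np.dot(a[:n], b[:n]) / n
        if score > best_score:
            best, best_score = k, score
    return best * dt
```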
eWand: A calibration framework for wide baseline frame-based and event-based camera systems
Gossard, Thomas, Ziegler, Andreas, Kolmar, Levin, Tebbe, Jonas, Zell, Andreas
Accurate calibration is crucial for using multiple cameras to triangulate the position of objects precisely. However, it is also a time-consuming process that needs to be repeated after every displacement of the cameras. The standard approach is to use a printed pattern with known geometry to estimate the intrinsic and extrinsic parameters of the cameras. The same idea can be applied to event-based cameras, though it requires extra work: by reconstructing frames from events, a printed pattern can be detected, or a blinking pattern can be displayed on a screen and detected directly from the events. Such methods can provide accurate intrinsic calibration for both frame- and event-based cameras. However, 2D patterns have several limitations for multi-camera extrinsic calibration when the cameras have very different viewpoints and a wide baseline: the pattern can only be detected from one direction and must be of significant size to compensate for its distance to the cameras, making extrinsic calibration time-consuming and cumbersome. To overcome these limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
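To illustrate why blinking LEDs are easy to localize in an event stream, the sketch below keeps pixels whose ON-event period matches the blink frequency and returns their centroid as the marker position. eWand's full pipeline (spheres, LED identification, multi-camera geometry) is considerably more involved; the tolerance, threshold, and function names here are assumptions.

```python
import numpy as np
from collections import defaultdict

def detect_blinking_led(events, f_blink, tol=0.2):
    """events: iterable of (x, y, t, p) with t in seconds.
    Returns the centroid (cx, cy) of pixels blinking at f_blink, or None."""
    period = 1.0 / f_blink
    on_times = defaultdict(list)
    for x, y, t, p in events:
        if p > 0:                                # keep ON events only
            on_times[(int(x), int(y))].append(t)
    hits = []
    for (x, y), ts in on_times.items():
        if len(ts) < 3:
            continue
        med = np.median(np.diff(np.sort(ts)))    # typical ON-event period
        if abs(med - period) < tol * period:     # matches the blink rate?
            hits.append((x, y))
    return tuple(np.mean(hits, axis=0)) if hits else None
```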
Target-free Extrinsic Calibration of Event-LiDAR Dyad using Edge Correspondences
Xing, Wanli, Lin, Shijie, Yang, Lei, Pan, Jia
Calibrating the extrinsic parameters of sensors is crucial for fusing multi-modal data. Recently, event cameras have emerged as a promising type of neuromorphic sensor, with many potential applications in fields such as mobile robotics and autonomous driving. When combined with LiDAR, they can provide more comprehensive information about the surrounding environment. Nonetheless, because the data representation of event cameras differs fundamentally from that of traditional frame-based cameras, calibrating them with LiDAR presents a significant challenge. In this paper, we propose a novel method to calibrate the extrinsic parameters between a dyad of an event camera and a LiDAR without the need for a calibration board or other equipment. Our approach exploits the fact that when an event camera is in motion, changes in reflectivity and geometric edges in the environment trigger numerous events, and these same edges can also be captured by LiDAR. Our method extracts edges from the events and the point clouds and correlates them to estimate the extrinsic parameters. Experimental results demonstrate that the proposed method is highly robust and effective in various scenes.
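The correlation idea can be sketched with a simple objective: project LiDAR edge points into the event camera under candidate extrinsics and score how well they land on an edge map accumulated from events; maximizing this score over the extrinsics captures the spirit of the approach. The scoring function below is illustrative and omits the paper's actual correspondence and optimization machinery.

```python
import numpy as np

def edge_alignment_score(edge_pts, R, t, K, event_edge_map):
    """edge_pts: (N, 3) LiDAR edge points; R, t: candidate LiDAR-to-camera
    extrinsics; K: 3x3 camera intrinsics; event_edge_map: HxW edge-strength
    image (e.g., a blurred accumulation of events). Higher is better."""
    pc = R @ edge_pts.T + t.reshape(3, 1)        # LiDAR -> camera frame
    pc = pc[:, pc[2] > 0.1]                      # keep points in front
    uv = (K @ pc) / pc[2]                        # pinhole projection
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    h, w = event_edge_map.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) # inside the image
    return event_edge_map[v[ok], u[ok]].sum()    # edge strength under points
```

Blurring the event edge map widens the basin of convergence, which is why accumulation-plus-smoothing is a common choice for such alignment objectives.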
Real-time event simulation with frame-based cameras
Ziegler, Andreas, Teigland, Daniel, Tebbe, Jonas, Gossard, Thomas, Zell, Andreas
Event cameras are becoming increasingly popular in robotics and computer vision due to their beneficial properties, e.g., high temporal resolution, high bandwidth, almost no motion blur, and low power consumption. However, these cameras remain expensive and scarce, making them inaccessible to most researchers. Event simulators minimize the need for real event cameras when developing novel algorithms. However, due to the computational complexity of the simulation, the event streams of existing simulators cannot be generated in real time; they must be pre-computed from existing video sequences or pre-rendered and then simulated from a virtual 3D scene. Although such offline-generated event streams can be used as training data for learning tasks, applications that depend on response time cannot yet benefit from these simulators, as they still require an actual event camera. This work proposes simulation methods that improve the performance of event simulation by two orders of magnitude, making it real-time capable, while remaining competitive in quality.
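The underlying event-generation model is simple to state: per pixel, compare each new frame's log intensity against a stored reference and emit one event per contrast-threshold crossing. The class below is a minimal sketch of this basic principle; the paper's real-time methods add interpolation and substantial optimizations beyond this naive form, and the parameter values are illustrative.

```python
import numpy as np

class EventSimulator:
    def __init__(self, first_frame, C=0.2, eps=1e-3):
        self.C = C                        # contrast threshold (log units)
        self.eps = eps                    # avoids log(0)
        self.ref = np.log(first_frame.astype(np.float64) + eps)

    def step(self, frame, t):
        """Simulate events for a new frame at time t.
        Returns a list of (x, y, t, polarity) tuples."""
        logI = np.log(frame.astype(np.float64) + self.eps)
        d = logI - self.ref
        n = np.floor(np.abs(d) / self.C).astype(int)  # crossings per pixel
        ys, xs = np.nonzero(n)
        events = [(x, y, t, int(np.sign(d[y, x])))
                  for y, x in zip(ys, xs) for _ in range(n[y, x])]
        self.ref += np.sign(d) * n * self.C           # advance by emitted steps
        return events
```

Note that this per-frame model timestamps all of a frame's events identically; distributing them between frames (e.g., by intensity interpolation) is one of the refinements real simulators make.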
Moving Object Detection for Event-based Vision using k-means Clustering
Mondal, Anindya, Das, Mayukhmali
Event-based cameras are bio-inspired sensors that mimic the working of the human eye (Gallego et al. [2020]). While frame-based cameras capture images at a fixed frame rate determined by an external clock, each pixel in an event-based camera memorizes the log intensity each time an event is sent and continuously monitors for a sufficient change in magnitude from this memorized value (Gallego et al. [2020]). Each event is transmitted by the sensor in the form of its location {x, y}, its time of occurrence (timestamp) t, and its polarity p (taking a binary value +1 or −1, representing whether the pixel became brighter or darker) (Chen et al. [2020]). The working of an event-based camera is shown in Figure 1. The sensors used in event-based cameras are data-driven, as their output depends on the amount of motion or brightness change in the scene (Gallego et al. [2020]): the greater the motion, the more events are generated. Events are recorded with microsecond resolution and transmitted with sub-millisecond latency, allowing these sensors to react quickly to visual stimuli (Gallego et al. [2020]). Thus, while frame-based cameras capture the absolute brightness of a scene, event-based cameras capture per-pixel brightness changes asynchronously, making traditional computer vision algorithms inapplicable to event data. Detection of moving objects is an important task in automation, where a computer differentiates between moving and stationary objects.
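A minimal sketch of the clustering idea: within a short time slice, events are triggered predominantly by moving objects (assuming a static camera), so running k-means on the event coordinates groups them into candidate objects. The choice of k, the slice length, and the use of scikit-learn are illustrative assumptions; the paper's pipeline involves further processing.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_moving_objects(events, t0, t1, k=2):
    """events: (N, 4) array of (x, y, t, p). Returns the k cluster
    centers (x, y) for events in the time slice t0 <= t < t1."""
    sl = events[(events[:, 2] >= t0) & (events[:, 2] < t1)]
    if len(sl) < k:
        return np.empty((0, 2))          # too few events to cluster
    km = KMeans(n_clusters=k, n_init=10).fit(sl[:, :2])
    return km.cluster_centers_
```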