Uncertainty-Driven Radar-Inertial Fusion for Instantaneous 3D Ego-Velocity Estimation
Rai, Prashant Kumar, Kowsari, Elham, Strokina, Nataliya, Ghabcheloo, Reza
arXiv.org Artificial Intelligence
F2 = ComplexBN(ComplexConv(F1))   (3)

Equation 3 further processes the features F1 from the previous layer through another complex convolution layer, and the output is normalized using complex batch normalization. Standardizing the features before they are processed further improves the stability and efficiency of the network.

F3 = SpatialAttention(ChannelAttention(F2))   (4)

In Equation 4, an attention mechanism (spatial + channel) is applied to F2, which allows the network to focus on the most informative features by weighting them according to their significance for ego-velocity estimation. We apply spatial attention over the feature maps (Doppler, Channels) and channel attention over the samples dimension.

Moreover, each complex-valued residual block in the network incorporates a skip connection: the output of each block is concatenated with its input before being passed to the subsequent blocks. This architectural choice mitigates the vanishing-gradient problem during training by allowing gradients to flow directly through the network layers, thus improving the learning and convergence of the network [34]. The network is designed to handle the complex-valued input from radar scans effectively, ensuring robust feature extraction for the subsequent processing stages.
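The Eq. 3-4 pipeline (complex convolution, complex batch normalization, then attention-based feature weighting) can be sketched on a tiny 1-D complex feature map using plain Python complex numbers. This is an illustrative toy, not the authors' code: the kernel is a made-up example, the batch norm centers by the complex mean and scales by the standard deviation of the moduli (a lighter stand-in for full complex whitening with the 2x2 real/imag covariance), and the attention is a simple magnitude-gated weighting rather than the paper's spatial/channel modules.

```python
import math

def complex_conv1d(x, w):
    """Valid-mode 1-D complex convolution (stand-in for ComplexConv in Eq. 3)."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(len(x) - k + 1)]

def complex_bn(f, eps=1e-5):
    """Simplified complex batch norm (stand-in for ComplexBN in Eq. 3):
    center by the complex mean, divide by the std of the moduli."""
    mu = sum(f) / len(f)
    centered = [v - mu for v in f]
    var = sum(abs(v) ** 2 for v in centered) / len(f)
    return [v / math.sqrt(var + eps) for v in centered]

def magnitude_attention(f):
    """Toy attention (stand-in for Eq. 4): gate each position with a sigmoid
    of its modulus, so high-energy features are weighted more strongly."""
    gates = [1.0 / (1.0 + math.exp(-abs(v))) for v in f]
    return [g * v for g, v in zip(gates, f)]

# F1 -> F2 -> F3 on a toy complex signal, mirroring Equations 3 and 4
F1 = [1 + 1j, 2 - 1j, 0 + 3j, -1 + 0j, 2 + 2j]
w = [0.5 + 0j, 0 - 0.5j]  # hypothetical learned complex kernel
F2 = complex_bn(complex_conv1d(F1, w))
F3 = magnitude_attention(F2)
```

After normalization the features F2 have zero complex mean, and the sigmoid gates in the attention step lie in (0, 1), so each weighted feature in F3 has a modulus no larger than its input.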
Jun-18-2025