Any good driver who is about to change lanes knows it's important to glance over their shoulder to ensure there are no vehicles in their blind spot -- and such real-time awareness of nearby vehicles is no less critical for autonomous driving systems. That's why self-driving technologies rely on a robust perception backbone that is expected to identify all relevant agents in the environment, including accurate pose and shape estimation of other vehicles sharing the road. Autonomous vehicle systems have evolved their own digital approaches to shoulder-checking, leveraging data from one of their most common sensing modalities, LiDAR.

Now, a team of researchers from Pittsburgh-based autonomous vehicle technology company Argo AI, Microsoft, and CMU has introduced a novel network architecture for jointly estimating the shape and pose of vehicles, even from partial LiDAR observations.

Existing SOTA methods for pose and shape prediction typically first estimate the pose of an unaligned partial point cloud, apply that pose to align the partial input, and only then estimate the shape. A drawback of this sequential encoder-pose-decoder and encoder-shape-decoder architecture is that any errors in the pose estimation network's output propagate directly into the shape estimate, ultimately degrading completion performance.
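To make the sequential pipeline concrete, here is a minimal sketch of the pose-then-shape flow the article describes. All function names are hypothetical stand-ins: the "networks" are replaced by toy geometric heuristics (centroid plus PCA heading for pose, mirroring for completion), purely to illustrate how a pose error made in the first stage is baked into everything downstream.

```python
import numpy as np

def estimate_pose(partial_points):
    """Toy stand-in for the encoder-pose-decoder network:
    translation = centroid, heading = principal axis in the ground plane."""
    centroid = partial_points.mean(axis=0)
    centered = partial_points - centroid
    cov = np.cov(centered[:, :2].T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]  # dominant direction as heading proxy
    yaw = np.arctan2(axis[1], axis[0])
    return centroid, yaw

def canonicalize(partial_points, centroid, yaw):
    """Apply the estimated pose to move the partial cloud into a
    canonical object-centric frame before shape completion."""
    c, s = np.cos(-yaw), np.sin(-yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return (partial_points - centroid) @ R.T

def complete_shape(canonical_points):
    """Toy stand-in for the encoder-shape-decoder network: mirror the
    cloud across the canonical x-axis to 'fill in' the unseen side.
    If the yaw estimate was wrong, this symmetry assumption fails."""
    mirrored = canonical_points * np.array([1.0, -1.0, 1.0])
    return np.vstack([canonical_points, mirrored])

# Sequential pipeline: pose errors flow straight into shape completion.
rng = np.random.default_rng(0)
partial = (rng.normal(size=(256, 3)) * np.array([2.0, 0.5, 0.4])
           + np.array([10.0, 5.0, 0.3]))  # elongated, off-origin "car" cloud
centroid, yaw = estimate_pose(partial)
canonical = canonicalize(partial, centroid, yaw)
completed = complete_shape(canonical)
```

The joint architecture proposed by the researchers is motivated by exactly this coupling: because shape completion here only sees the canonicalized cloud, it has no way to recover from a bad `yaw` or `centroid` produced upstream.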
Sep-16-2020, 23:35:39 GMT