LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery
Therefore, techniques to reconstruct articulated 3D objects from 2D images are crucial and highly useful. In this work, we propose a practical problem setting to estimate 3D pose and shape of animals given only a few (10-30) in-the-wild images of a particular animal species (say, horse). Contrary to existing works that rely on pre-defined template shapes, we do not assume any form of 2D or 3D ground-truth annotations, nor do we leverage any multi-view or temporal information. Moreover, each input image ensemble can contain animal instances with varying poses, backgrounds, illuminations, and textures. Our key insight is that 3D parts have much simpler shape compared to the overall animal and that they are robust w.r.t.
LASSIE's robot dog may join astronauts on Mars
When humans eventually set foot on Mars, they may have a four-legged companion by their side. But the dog accompanying them won't be a canine at all: it will be a quadruped robot designed to gather samples and keep astronauts on the Red Planet from twisting an ankle. Designed for autonomy, it will be able to operate independently of humans. Put another way, the Mars dog will walk off-leash.
Supplementary Material for LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery
In this supplementary document, we present the implementation details, model analyses, and additional results of our method. We also provide a short video explaining our framework with illustrations and visual results. To discover parts, we collect and cluster the features of salient image patches, which we obtain by thresholding the saliency scores. As shown in Figure 3, the primitive MLP and part MLPs adopt an architecture similar to NeRS: three fully-connected layers, with instance normalization and Leaky ReLU activation on the middle layers. We show the architecture diagrams for the primitive MLP and part MLPs.
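The described architecture (three fully-connected layers with instance normalization and Leaky ReLU on the middle layers) can be sketched in PyTorch as below. This is a minimal illustration, not the authors' implementation: the input/output dimensions, hidden width, and negative slope are assumptions chosen for the example, not values taken from the paper.

```python
import torch
import torch.nn as nn


class PartMLP(nn.Module):
    """Sketch of a NeRS-style coordinate MLP: three fully-connected
    layers, with instance normalization and LeakyReLU applied to the
    middle layers. Dimensions (in_dim=3, hidden=256, out_dim=3) are
    illustrative assumptions."""

    def __init__(self, in_dim: int = 3, hidden: int = 256, out_dim: int = 3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, out_dim)  # final layer: no norm/activation
        self.norm1 = nn.InstanceNorm1d(hidden)
        self.norm2 = nn.InstanceNorm1d(hidden)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, in_dim).
        # InstanceNorm1d expects (batch, channels, length), so we
        # transpose around each normalization call.
        h = self.act(self.norm1(self.fc1(x).transpose(1, 2)).transpose(1, 2))
        h = self.act(self.norm2(self.fc2(h).transpose(1, 2)).transpose(1, 2))
        return self.fc3(h)


if __name__ == "__main__":
    mlp = PartMLP()
    pts = torch.rand(2, 100, 3)  # 2 instances, 100 surface points each
    print(mlp(pts).shape)  # torch.Size([2, 100, 3])
```

In this sketch the network maps per-point coordinates to per-point outputs, so normalization is applied across points of each instance; the exact input/output semantics of the primitive and part MLPs follow the paper's Figure 3.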
Learning personalized reward functions with Interaction-Grounded Learning (IGL)
Rewards play a crucial role in reinforcement learning (RL). A good choice of reward function motivates an agent to explore and learn which actions are valuable. The feedback that an agent receives via rewards allows it to update its behavior and learn useful policies. However, designing reward functions is complicated and cumbersome, even for domain experts. Automatically inferring a reward function is more desirable for end-users interacting with a system.