No RL, No Simulation: Learning to Navigate without Navigating
Meera Hahn
Neural Information Processing Systems
We use each dataset's given train/val/test splits. The GNN is composed of two graph attention (GAT) layers trained with dropout of 0.6. The node features output by the second GAT layer are pairwise concatenated with the ResNet18 feature of the goal image. The concatenated features are fed into a 2-layer MLP (512 to 256 to 1) with ReLU activation, and the output is passed through a sigmoid. The network is trained with mean squared error (MSE) loss against the true distance to goal.
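The scoring head above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the per-node GAT features are pairwise concatenated with the goal-image embedding and scored by the 512-to-256-to-1 ReLU MLP with a sigmoid output. The individual feature dimensions (256 for node features and 256 for the goal embedding, summing to the stated 512-dimensional MLP input) are assumptions, as is the function naming.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_nodes(node_feats, goal_feat, W1, b1, W2, b2):
    """Score each graph node against the goal image.

    node_feats: (N, 256) outputs of the second GAT layer (dim assumed)
    goal_feat:  (256,) ResNet18 embedding of the goal image (dim assumed)
    Returns (N,) sigmoid scores, trained to match true distance to goal.
    """
    n = node_feats.shape[0]
    # Pairwise concatenation: every node feature paired with the goal feature
    goal = np.broadcast_to(goal_feat, (n, goal_feat.shape[0]))
    x = np.concatenate([node_feats, goal], axis=-1)   # (N, 512)
    h = relu(x @ W1 + b1)                             # (N, 256) hidden layer
    return sigmoid(h @ W2 + b2).reshape(-1)           # (N,) in (0, 1)

def mse_loss(pred, target):
    # MSE against the true (normalized) distance to goal
    return float(np.mean((pred - target) ** 2))

# Example forward pass with random weights
rng = np.random.default_rng(0)
pred = score_nodes(
    rng.standard_normal((4, 256)), rng.standard_normal(256),
    0.01 * rng.standard_normal((512, 256)), np.zeros(256),
    0.01 * rng.standard_normal((256, 1)), np.zeros(1),
)
loss = mse_loss(pred, rng.random(4))
```

In training, the sigmoid bounds predictions to (0, 1), so the true distance to goal would be normalized to the same range before computing the MSE.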