Touch and Go: Learning from Human-Collected Vision and Touch. Supplementary Material
– Neural Information Processing Systems
We provide a webpage for our dataset, through which the dataset is currently available (and directly via this link).

We use a learning rate of 0.01 for ResNet-18 and 0.1 for ResNet-50.

This loss is motivated by recent contrastive learning: it maximizes the probability that the neural network selects the corresponding patch in both the original image x_I and the generated image x̂_I. For reference, we also show the image that corresponds to the tactile example at the far right (not used by the model).
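To make the objective concrete, the following is a minimal sketch of an InfoNCE-style contrastive loss of the kind described: for each patch embedding from the original image x_I, the matching patch from the generated image x̂_I is the positive, and the other patches in the batch serve as negatives. The function name, the temperature value, and the use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def contrastive_patch_loss(feats_orig, feats_gen, temperature=0.07):
    """InfoNCE-style sketch (not the paper's code): maximize the probability
    of selecting the corresponding patch in the generated image for each
    patch in the original image.

    feats_orig, feats_gen: (N, D) arrays of patch embeddings, where row i
    of feats_gen is the patch corresponding to row i of feats_orig.
    """
    # L2-normalize so the dot product is a cosine similarity
    a = feats_orig / np.linalg.norm(feats_orig, axis=1, keepdims=True)
    b = feats_gen / np.linalg.norm(feats_gen, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature          # (N, N) similarity matrix

    # Softmax cross-entropy with the diagonal (matching pairs) as targets
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The loss is smallest when each original patch is most similar to its own generated counterpart, which is exactly the "select the corresponding patch" behavior the text describes.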