In this paper, we study the problem of answering visual analogy questions. These questions take the form "image A is to image B as image C is to what?" Answering them entails discovering the mapping from image A to image B, extending that mapping to image C, and searching for an image D such that the relation from A to B also holds from C to D. We pose this problem as learning an embedding that encourages pairs of analogous images, i.e., pairs related by similar transformations, to lie close together, using convolutional neural networks in a quadruple Siamese architecture. We introduce a dataset of visual analogy questions over natural images and present the first results of their kind on solving analogy questions on natural images.
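The objective described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: we assume a toy shared encoder in place of the convolutional network, represent the A-to-B transformation as the difference of embeddings, and use a standard contrastive loss to pull the transformation of an analogous quadruple (A, B, C, D) together and push non-analogous quadruples apart. All function names and the margin value are hypothetical.

```python
import numpy as np

def encode(x, W):
    # Shared "Siamese" encoder: the same weights W embed all four images.
    # (Stand-in for the convolutional network in the paper.)
    return np.tanh(x @ W)

def transformation(a, b, W):
    # Represent the transformation from image a to image b as the
    # difference of their embeddings.
    return encode(b, W) - encode(a, W)

def analogy_loss(a, b, c, d, W, label, margin=1.0):
    # Contrastive loss on transformation vectors:
    #   label=1 (analogous quadruple): penalize distance between
    #     transformation(A->B) and transformation(C->D);
    #   label=0 (non-analogous): penalize only if the transformations
    #     are closer than the margin.
    dist = np.linalg.norm(transformation(a, b, W) - transformation(c, d, W))
    if label == 1:
        return dist ** 2
    return max(0.0, margin - dist) ** 2

def answer_analogy(a, b, c, candidates, W):
    # Answer "A is to B as C is to what?" by retrieving the candidate D
    # whose transformation from C best matches the A-to-B transformation.
    target = transformation(a, b, W)
    dists = [np.linalg.norm(target - transformation(c, d, W))
             for d in candidates]
    return int(np.argmin(dists))
```

At test time, retrieval reduces to a nearest-neighbor search in transformation space, which is why the embedding is trained so that analogous transformations cluster together.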