Goto


Large Scale Adversarial Representation Learning

Neural Information Processing Systems

Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work we show that progress in image generation quality translates to substantially improved representation learning performance. Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as compelling results in unconditional image generation.
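The core architectural idea — a discriminator that scores joint (data, latent) pairs, with real pairs (x, E(x)) played against generated pairs (G(z), z) — can be sketched in a few lines. This is a minimal numpy illustration of the BiGAN-style joint objective, not the paper's actual architecture; the linear G, E, and W below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: data x in R^4, latent z in R^2.
DX, DZ = 4, 2

# Linear stand-ins for the three components (hypothetical parameters,
# not BigBiGAN's deep networks).
G = rng.normal(size=(DZ, DX))      # generator  z -> x
E = rng.normal(size=(DX, DZ))      # encoder    x -> z
W = rng.normal(size=(DX + DZ, 1))  # joint discriminator on (x, z) pairs

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def joint_score(x, z):
    """Discriminator score for concatenated (x, z) pairs."""
    return sigmoid(np.concatenate([x, z], axis=1) @ W)

def bigan_loss(x_real, z_prior):
    """GAN cross-entropy over joint pairs:
    real pairs (x, E(x)) vs. generated pairs (G(z), z)."""
    real = joint_score(x_real, x_real @ E)    # (x, E(x))
    fake = joint_score(z_prior @ G, z_prior)  # (G(z), z)
    return -np.mean(np.log(real + 1e-8) + np.log(1.0 - fake + 1e-8))

x = rng.normal(size=(8, DX))
z = rng.normal(size=(8, DZ))
print(bigan_loss(x, z))
```

Because the discriminator only ever sees pairs, the encoder is pushed to invert the generator, which is what makes E(x) usable as a learned representation.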


Algorithms and Limits for Compact Plan Representations

Journal of Artificial Intelligence Research

Compact representations of objects are a common concept in computer science. Automated planning can be viewed as a case of this concept: a planning instance is a compact implicit representation of a graph, and the problem is to find a path (a plan) in this graph. While the graphs themselves are represented compactly as planning instances, the paths are usually represented explicitly as sequences of actions. Some cases are known where the plans always have compact representations, for example, using macros. We show that these results do not extend to the general case, by proving a number of bounds for compact representations of plans under various criteria, such as efficient sequential or random access to actions. In addition, we show that our results have consequences for what can be gained from reformulating planning into some other problem. In contrast, we also prove a number of positive results, demonstrating restricted cases where plans do have useful compact representations, as well as proving that macro plans have favourable access properties. Finally, we discuss our results in relation to other relevant contexts.
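The macro idea can be made concrete: a macro table of linear size can encode a plan of exponential length while still supporting fast random access to individual actions. This is a minimal sketch, not the paper's formal construction; the macros and the actions "a"/"b" are hypothetical placeholders.

```python
from functools import lru_cache

# A macro plan as a straight-line program: each macro expands into a
# sequence of primitive actions and/or earlier macros.
N = 10
MACROS = {"M0": ["a", "b"]}
for k in range(1, N + 1):
    MACROS[f"M{k}"] = [f"M{k-1}", f"M{k-1}"]

@lru_cache(maxsize=None)
def length(symbol):
    """Length of the fully expanded plan under `symbol`."""
    if symbol not in MACROS:  # primitive action
        return 1
    return sum(length(s) for s in MACROS[symbol])

def action_at(symbol, i):
    """Random access: the i-th action of `symbol`'s expansion,
    in time proportional to macro depth, not plan length."""
    while symbol in MACROS:
        for part in MACROS[symbol]:
            n = length(part)
            if i < n:
                symbol = part
                break
            i -= n
    return symbol

# The table has O(N) entries but encodes a plan of length 2**(N+1).
print(length(f"M{N}"))        # 2048
print(action_at(f"M{N}", 0))  # a
```

The paper's negative results say that, in general, no such small table with good access properties exists; the sketch only shows why the restricted macro case is attractive.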


Tari

AAAI Conferences

There are multiple and even interacting dimensions along which shape representation schemes may be compared and contrasted. In this paper, we focus on the following question. Are the building blocks in a compositional model localized in space (e.g. as in part-based representations) or are they holistic simplifications (e.g. as in spectral representations)? Existing shape representation schemes prefer one or the other. We propose a new shape representation paradigm that encompasses both choices.
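The two families of building blocks can be illustrated on a toy contour: spectral descriptors summarize the whole shape with a few global coefficients, while part-based descriptors summarize localized pieces independently. This is a generic numpy sketch of the distinction, not the paper's proposed paradigm; the radial-profile shape is hypothetical.

```python
import numpy as np

# A toy closed contour as a radial profile r(theta).
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
r = 1.0 + 0.3 * np.cos(3 * theta)   # three-lobed shape

# Holistic / spectral view: a few global Fourier coefficients
# describe the entire contour at once.
coeffs = np.fft.rfft(r)
coeffs[5:] = 0                      # keep only low-order harmonics
r_spectral = np.fft.irfft(coeffs, n=r.size)

# Localized / part-based view: split the contour into parts and
# summarize each part independently (here: per-part means).
parts = r.reshape(8, -1)            # 8 localized arcs
r_parts = np.repeat(parts.mean(axis=1), r.size // 8)

# This shape has only harmonics 0 and 3, so the spectral
# reconstruction is exact up to floating-point error.
print(np.max(np.abs(r - r_spectral)) < 1e-6)  # True
```

Editing a spectral coefficient changes the contour everywhere; editing one part changes it only locally — the trade-off the abstract is pointing at.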


Backstrom

AAAI Conferences

Most planning formalisms allow instances with shortest plans of exponential length. While such instances are problematic, they are usually unavoidable and can occur in practice. There are several known cases of restricted planning problems where plans can be exponential but always have a compact (i.e.
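A standard illustration of exponentially long shortest plans (not taken from this abstract) is an n-bit binary counter: driving it from all-zeros to all-ones takes 2**n - 1 increments, so any explicitly written plan is exponential in the size of the instance description. A minimal sketch, treating each increment as one operator application:

```python
def increment(bits):
    """One counter step on a little-endian tuple of n bits."""
    bits = list(bits)
    for i in range(len(bits)):
        if bits[i] == 0:
            bits[i] = 1
            return tuple(bits)
        bits[i] = 0  # carry
    return tuple(bits)

def shortest_plan_length(n):
    """Steps from all-zeros to all-ones: 2**n - 1."""
    state, goal, steps = (0,) * n, (1,) * n, 0
    while state != goal:
        state = increment(state)
        steps += 1
    return steps

for n in range(1, 8):
    assert shortest_plan_length(n) == 2 ** n - 1
print(shortest_plan_length(7))  # 127
```

The instance grows linearly in n while the plan grows exponentially, which is exactly the gap that compact plan representations aim to close.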


Knowledge Fusion via Embeddings from Text, Knowledge Graphs, and Images

arXiv.org Machine Learning

We present a baseline approach for cross-modal knowledge fusion. Different basic fusion methods are evaluated on existing embedding approaches to show the potential of combining knowledge about individual concepts across modalities into a fused concept representation.
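Two of the simplest fusion baselines — concatenation and averaging in a shared space — can be sketched directly. The per-modality embeddings and the random projection below are hypothetical stand-ins (the paper uses existing pretrained embeddings and may use different fusion variants).

```python
import numpy as np

rng = np.random.default_rng(1)

def l2norm(v):
    return v / np.linalg.norm(v)

# Hypothetical per-modality embeddings for a single concept.
text_emb  = l2norm(rng.normal(size=300))   # word-embedding-style
kg_emb    = l2norm(rng.normal(size=100))   # KG-embedding-style
image_emb = l2norm(rng.normal(size=512))   # CNN-feature-style

# Baseline 1: concatenation keeps every modality's dimensions.
fused_concat = np.concatenate([text_emb, kg_emb, image_emb])

# Baseline 2: averaging after projecting to a shared size
# (a fixed random projection stands in for a learned one).
D = 100
proj = {name: rng.normal(size=(emb.size, D)) / np.sqrt(emb.size)
        for name, emb in [("t", text_emb), ("k", kg_emb), ("i", image_emb)]}
fused_avg = l2norm(np.mean(
    [l2norm(text_emb @ proj["t"]),
     l2norm(kg_emb @ proj["k"]),
     l2norm(image_emb @ proj["i"])], axis=0))

print(fused_concat.shape, fused_avg.shape)  # (912,) (100,)
```

Concatenation preserves all modality-specific information at the cost of dimensionality; averaging yields a fixed-size vector but requires the modalities to live in a comparable space first.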