Collaborating Authors: yang


Good Semi-supervised Learning That Requires a Bad GAN

Neural Information Processing Systems

Semi-supervised learning methods based on generative adversarial networks (GANs) have obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically, we show that given the discriminator objective, good semi-supervised learning indeed requires a bad generator, and we propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.
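The feature matching GANs mentioned above train the generator to match the mean discriminator features of real data rather than to fool the discriminator outright. A minimal numpy sketch of that objective (function name and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Squared L2 distance between the batch-mean discriminator features
    of real samples and generated samples.

    real_feats, fake_feats: (batch, feat_dim) arrays of intermediate
    discriminator activations.
    """
    gap = real_feats.mean(axis=0) - fake_feats.mean(axis=0)
    return float(np.dot(gap, gap))
```

If the generated batch matches the real batch's mean features exactly, the loss is zero even when individual samples differ, which is part of why this objective yields a "bad" (non-realistic) generator.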


Differentiable Learning of Logical Rules for Knowledge Base Reasoning

Neural Information Processing Systems

We study the problem of learning probabilistic first-order logical rules for knowledge base reasoning. This learning problem is difficult because it requires learning the parameters in a continuous space as well as the structure in a discrete space. We propose a framework, Neural Logic Programming, that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model. This approach is inspired by a recently-developed differentiable logic called TensorLog [5], where inference tasks can be compiled into sequences of differentiable operations. We design a neural controller system that learns to compose these operations. Empirically, our method outperforms prior work on multiple knowledge base benchmark datasets, including Freebase and WikiMovies.
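In TensorLog-style inference, entities are encoded as one-hot vectors and each relation as an adjacency matrix, so a chain rule body compiles into a sequence of differentiable matrix-vector products. A toy sketch under those assumptions (the knowledge base and helper below are illustrative):

```python
import numpy as np

# Toy knowledge base over 3 entities: parent[i, j] = 1 iff
# parent(entity_i, entity_j) holds (0 -> 1 and 1 -> 2 here).
parent = np.array([[0., 1., 0.],
                   [0., 0., 1.],
                   [0., 0., 0.]])

def apply_rule(x, operators):
    """Apply a chain of relation operators to an entity vector,
    compiling the rule body into differentiable matrix products."""
    for op in operators:
        x = x @ op
    return x

# grandparent(X, Y) <- parent(X, Z), parent(Z, Y)
x = np.array([1., 0., 0.])                # one-hot encoding of entity 0
scores = apply_rule(x, [parent, parent])  # soft scores over candidate Y
```

Because every step is a differentiable operation, rule parameters (e.g. weights over which operator to apply at each step) can be learned by gradient descent, which is what the neural controller in the abstract composes.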



Imagine Losing Your Job to the Mere Possibility of AI

The Atlantic - Technology

The technology may not be ready to replace workers, but that isn't stopping execs from pushing forward anyway. Late last month, at an event in Washington, D.C., Andrew Yang delivered a bleak message. "I have bad news, America," he told the crowd. The Fuckening is the name that Yang, a former presidential candidate, has given to AI's disembowelment of the workforce. As he sees it, millions of knowledge workers will soon lose their jobs, personal-bankruptcy rates will spike, and entire downtowns will turn vacant as offices hollow out.


This AI Tool Will Tell You to Stop Slacking Off

WIRED

Fomi watches you work, then scolds you when your attention wanders. It's helpful, but there are privacy issues to consider. I've tested a lot of software tools over the years designed to block distractions and keep you focused. None of them work perfectly, mostly because of context. Reddit, for example, is something I should generally avoid during the workday, so I tend to block it--this is a good decision for me overall.



Non-Local Recurrent Network for Image Restoration

Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, Thomas S. Huang

Neural Information Processing Systems

Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks.
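The non-local self-similarity prior says that a pixel can borrow information from distant but similar patches. A common way to realize this in a network is an embedded-Gaussian non-local operation, where each position's output is a softmax-weighted sum over all positions. A minimal sketch under that assumption (not the paper's exact recurrent formulation):

```python
import numpy as np

def non_local_block(x):
    """Embedded-Gaussian non-local operation over spatial positions.

    x: (N, C) array of features at N flattened positions. Each output
    position is a softmax-weighted aggregation of features from every
    position, so distant self-similar patches can contribute to
    restoring a given pixel.
    """
    sim = x @ x.T                          # pairwise similarities f(x_i, x_j)
    sim -= sim.max(axis=1, keepdims=True)  # subtract row max for stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)      # softmax over positions j
    return w @ x                           # weighted sum of features g(x_j)
```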




Transfer Learning via Minimizing the Performance Gap Between Domains

Boyu Wang, Jorge Mendez, Mingbo Cai, Eric Eaton

Neural Information Processing Systems

To address this issue, we present the first analysis for instance weighting transfer learning that considers the presence of labeled target examples. The contribution of our work is two-fold. 1. We address the question of how to measure the divergence between two domains given label information for the target domain.
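Instance weighting transfer learning of the kind described here typically combines a reweighted source-domain risk with the empirical risk on the labeled target examples. A hedged sketch of that combined objective (the weighting scheme and trade-off parameter are illustrative, not the paper's method):

```python
import numpy as np

def weighted_risk(source_losses, source_weights, target_losses, lam=0.5):
    """Convex combination of an instance-weighted source risk and the
    empirical risk on labeled target examples.

    source_losses:  (n_src,) per-instance losses on the source domain.
    source_weights: (n_src,) nonnegative instance weights (e.g. from a
                    learned divergence between the domains).
    target_losses:  (n_tgt,) per-instance losses on the labeled target data.
    lam:            trade-off between source and target risk.
    """
    w = source_weights / source_weights.sum()  # normalize to a distribution
    return float((1 - lam) * (w @ source_losses) + lam * target_losses.mean())
```

Upweighting source instances that resemble the target domain (and downweighting the rest) is what lets labeled target examples steer the transfer.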