Self-Supervised Multi-Object Tracking with Cross-Input Consistency (Supplementary Material) Favyen Bastani, Songtao He, Sam Madden

Neural Information Processing Systems

In this appendix, we detail occlusion-based hiding and include results for five additional experiments, among them:

1. Varying Detector Performance (Training): MOTA on MOT17 when using detectors of varying performance during self-supervised training of our tracker model.
3. Varying Unlabeled Video Dataset Size: MOTA when self-supervised learning is conducted on unlabeled video datasets of varying size.
4. Varying Sequence Length: adjusting the length of video sequences that are sampled on each training step.

Under occlusion-based hiding, one tracker observes the detections in each frame while the other does not; for the two inputs to yield consistent outputs, the model must make similar tracking decisions when re-localizing across occluded frames as it does when observing detections in every frame. In practice, however, Only-Occlusion yields a model that matches all tracks to the absent column in the occluded frame and simply re-localizes the tracks after the occlusion. Thus, to make this scheme effective, we must prevent the propagation of features directly from the input: RNN Hand-off prevents such memorization by cutting off the propagation of RNN features, so the tracker learns to match tracks to detections by comparing only the detection features in consecutive frames. Comparing detections solely in a pairwise frame-by-frame manner is an effective tracking strategy.
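The pairwise frame-by-frame matching described above can be sketched as greedy assignment of tracks to the most feature-similar detection in the next frame, with an "absent" outcome for tracks below a similarity threshold. This is a minimal illustration, not the paper's implementation; the embeddings, threshold, and greedy strategy are all assumptions for the sketch.

```python
def cosine(u, v):
    """Cosine similarity between two feature vectors (plain lists)."""
    num = sum(a * b for a, b in zip(u, v))
    du = sum(a * a for a in u) ** 0.5
    dv = sum(b * b for b in v) ** 0.5
    return num / (du * dv)

def match_frame(track_feats, det_feats, threshold=0.5):
    """Greedily match each track to its most similar unused detection
    in the next frame; tracks below the threshold map to None,
    i.e., the 'absent' column (e.g., an occluded object)."""
    assignments = {}
    used = set()
    for tid, tf in track_feats.items():
        best, best_sim = None, threshold
        for did, df in det_feats.items():
            if did in used:
                continue
            sim = cosine(tf, df)
            if sim > best_sim:
                best, best_sim = did, sim
        if best is not None:
            used.add(best)
        assignments[tid] = best
    return assignments
```

For example, two tracks with orthogonal features each claim the detection closest to them, and a track with no sufficiently similar detection is marked absent until it can be re-localized.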


Supplementary materials - NeuMiss networks: differentiable programming for supervised learning with missing values

Neural Information Processing Systems

The last equality concludes the proof. Assume that the data are generated via the linear model defined in equation (1) and satisfy Assumption 1. Additionally, assume that either Assumption 2 or Assumption 3 holds. Lemma 1 gives the general expression of the Bayes predictor for any data distribution and missing-data mechanism. This concludes the proof according to Lemma 1. Assume that the data are generated via the linear model defined in equation (1) and satisfy Assumption 1 and Assumption 4. Here we establish an auxiliary result controlling the convergence of the Neumann iterates to the matrix inverse.
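The Neumann iterates referred to here can be made concrete: for a square matrix A whose spectral radius is below one, the iterates S_{k+1} = I + A S_k converge to (I - A)^{-1}, which is how a Neumann-style network can approximate a matrix inverse with a fixed number of layers. A minimal pure-Python sketch, assuming a small illustrative matrix (not one from the paper):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def neumann_iterates(A, depth):
    """S_0 = I, S_{k+1} = I + A @ S_k.
    Converges to (I - A)^{-1} when the spectral radius of A is < 1
    (truncating the Neumann series I + A + A^2 + ... at the given depth)."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    S = I
    for _ in range(depth):
        S = mat_add(I, mat_mul(A, S))
    return S
```

The approximation error of the depth-k iterate shrinks geometrically with the spectral radius of A, which is the kind of convergence control the auxiliary result establishes.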






Respond to Reviewer 1: A common bias is that meta-learning should tackle transfer learning or few-shot learning; the aim of our paper is to improve general supervised learning performance via meta-learning

Neural Information Processing Systems

As pointed out by ICLR 2019 AnonReviewer3 of the MAXL paper, "Moreover, since the method is not a meta-learning ..." To facilitate experiments, we resize images to 64×64 resolution. For regression, we provide kNN results in Table 1 (updated results for regression) on ImageNet. We hope our response addresses most of your concerns, and we sincerely hope you will reconsider your score.

Respond to Reviewer 2: In fact, we did not observe optimization difficulties when training all variables together. Besides, our model is not sensitive to the choice of datasets.


Strongly local p-norm-cut algorithms for semi-supervised learning and local graph clustering

Neural Information Processing Systems

Graph-based semi-supervised learning is the problem of learning a labeling function for the graph nodes given a few example nodes, often called seeds, usually under the assumption that the graph's edges indicate similarity of labels. This is closely related to the local graph clustering or community detection problem of finding a cluster or community of nodes around a given seed. For this problem, we propose a novel generalization of the random-walk, diffusion, and smooth-function methods in the literature to a convex p-norm cut function. The need for our p-norm methods is that, in our study of existing methods, we find that principled methods based on eigenvectors, spectral techniques, random walks, or linear systems often have difficulty capturing the correct boundary of a target label or target cluster. In contrast, 1-norm or maxflow-mincut based methods capture the boundary but cannot grow from a small seed set, and hybrid procedures that use both have many hard-to-set parameters.
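The random-walk/diffusion family that this work generalizes can be illustrated with personalized PageRank computed by power iteration: mass is repeatedly teleported back to the seed set, so nodes near the seeds receive higher scores. This is a hedged sketch of the baseline method class, not the paper's p-norm algorithm; the toy graph and the teleportation parameter are illustrative assumptions.

```python
def personalized_pagerank(adj, seeds, alpha=0.15, iters=200):
    """Diffusion from a seed set on an undirected graph given as an
    adjacency matrix: iterate x <- alpha * s + (1 - alpha) * P^T x,
    where P is the row-stochastic random-walk matrix and s is the
    uniform distribution on the seeds."""
    n = len(adj)
    s = [1.0 / len(seeds) if v in seeds else 0.0 for v in range(n)]
    deg = [sum(row) for row in adj]
    x = s[:]
    for _ in range(iters):
        nx = [alpha * s[v] for v in range(n)]
        for u in range(n):
            if deg[u] == 0:
                continue  # dangling node: no outgoing walk mass
            for v in range(n):
                if adj[u][v]:
                    nx[v] += (1 - alpha) * x[u] * adj[u][v] / deg[u]
        x = nx
    return x
```

A semi-supervised labeling or local clustering step then thresholds or sweeps over these scores; the 1-norm and maxflow-mincut methods contrasted above replace this smooth spreading with a cut objective.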


Discover and Align Taxonomic Context Priors for Open-world Semi-Supervised Learning

Neural Information Processing Systems

Open-world Semi-Supervised Learning (OSSL) is a realistic and challenging task, aiming to classify unlabeled samples from both seen and novel classes using partially labeled samples from the seen classes. Previous works typically exploit the relationships between samples as priors on pre-defined single-granularity labels to help novel class recognition. In fact, classes follow a taxonomy, and samples can be classified at multiple levels of granularity, which contain more underlying relationships for supervision. We thus argue that learning with single-granularity labels results in sub-optimal representation learning and inaccurate pseudo labels, especially with unknown classes. In this paper, we take the initiative to explore this direction and propose a unified framework, called Taxonomic context prIors Discovering and Aligning (TIDA), which exploits the relationships between samples at multiple levels of granularity. It allows us to discover multi-granularity semantic concepts as taxonomic context priors (i.e., sub-class, target-class, and super-class) and then collaboratively leverage them to enhance representation learning and improve the quality of pseudo labels.