In the main objective, the program optimizes Λ based on the supervised loss of the "validation" set. SSL typically uses an 'unsupervised loss' to leverage unlabeled data. While the model may not generalize if the unsupervised loss is poorly designed, recent works [38, 36] empirically validate their proposed loss. Theoretical analysis of SSL has also been provided under various assumptions, e.g., [6, A]. We encourage R1 to study these works, which show how unsupervised losses aid generalization.
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations. However, since auxiliary losses are minimized only on training data, they suffer from the same generalization gap as regular task losses. Moreover, by adding a term to the loss function, the model optimizes a different objective than the one we care about. In this work we address both problems: first, we take inspiration from transductive learning and note that after receiving an input but before making a prediction, we can fine-tune our networks on any unsupervised loss. We call this process tailoring, because we customize the model to each input to ensure our prediction satisfies the inductive bias. Second, we formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on a diverse set of examples.
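The prediction-time adaptation described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the model is a linear map, and the assumed inductive bias is that the outputs should sum to 1, encoded as the unsupervised loss L(W) = (sum(Wx) - 1)^2. Tailoring takes a few gradient steps on this loss, on a per-input copy of the parameters, before predicting.

```python
import numpy as np

def predict_tailored(W, x, steps=5, lr=0.04):
    """Tailoring sketch: before predicting on input x, fine-tune a copy of
    the parameters on an unsupervised loss evaluated at this input only.
    Hypothetical inductive bias: outputs should sum to 1."""
    W = W.copy()  # per-input copy; the trained weights are left untouched
    for _ in range(steps):
        y = W @ x
        residual = y.sum() - 1.0  # violation of the assumed constraint
        # gradient of (sum(Wx) - 1)^2 w.r.t. W_ij is 2 * residual * x_j
        grad = 2.0 * residual * np.outer(np.ones_like(y), x)
        W -= lr * grad
    return W @ x

W = np.array([[0.5, 0.2], [0.1, 0.3]])  # toy trained weights
x = np.array([1.0, 2.0])                # query input
y = predict_tailored(W, x)              # y.sum() is driven toward 1
```

Meta-tailoring would wrap this inner loop in an outer one that trains W so that the *tailored* prediction minimizes the task loss, analogous to the nested optimization in meta-learning.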
Rethinking pooling in graph neural networks -- Supplementary material -- A Implementation details
Table S1 reports summary statistics of the datasets used in this paper, including PROTEINS, NCI109, DD, and MOLHIV. For all datasets, we employ an initial convolution to extract node embeddings. The Complement model uses exactly the same setup. We use mini-batches of size 64 for SMNIST and ZINC.
Bilevel Joint Unsupervised and Supervised Training for Automatic Speech Recognition
Cui, Xiaodong, Saif, A F M, Lu, Songtao, Chen, Lisha, Chen, Tianyi, Kingsbury, Brian, Saon, George
In this paper, we propose a bilevel joint unsupervised and supervised training (BL-JUST) framework for automatic speech recognition. Compared to the conventional pre-training and fine-tuning strategy which is a disconnected two-stage process, BL-JUST tries to optimize an acoustic model such that it simultaneously minimizes both the unsupervised and supervised loss functions. Because BL-JUST seeks matched local optima of both loss functions, acoustic representations learned by the acoustic model strike a good balance between being generic and task-specific. We solve the BL-JUST problem using penalty-based bilevel gradient descent and evaluate the trained deep neural network acoustic models on various datasets with a variety of architectures and loss functions. We show that BL-JUST can outperform the widely-used pre-training and fine-tuning strategy and some other popular semi-supervised techniques.
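The penalty-based bilevel descent mentioned above can be illustrated on a toy problem. This is a hedged sketch, not the BL-JUST implementation: the parameter is a scalar, and the supervised and unsupervised losses are hypothetical quadratics with different minima. The penalized objective L_sup + λ·L_unsup is minimized by gradient descent while λ is gradually increased, tightening the lower-level (unsupervised) optimality condition.

```python
def l_sup_grad(theta):    # gradient of the toy supervised loss (theta - 3)^2
    return 2.0 * (theta - 3.0)

def l_unsup_grad(theta):  # gradient of the toy unsupervised loss (theta - 2)^2
    return 2.0 * (theta - 2.0)

def bl_just_penalty(theta=0.0, rounds=6, steps=200, lr=0.01):
    """Penalty-based bilevel gradient descent sketch: each round runs
    gradient descent on l_sup + lam * l_unsup, then doubles the penalty
    lam so the unsupervised loss is enforced more tightly."""
    lam = 1.0
    for _ in range(rounds):
        for _ in range(steps):
            theta -= lr * (l_sup_grad(theta) + lam * l_unsup_grad(theta))
        lam *= 2.0
    return theta

theta = bl_just_penalty()
# theta settles near the penalized stationary point (3 + 2*lam)/(1 + lam),
# which approaches the unsupervised optimum theta = 2 as lam grows
```

As λ grows, the iterate is pulled toward a matched local optimum of both losses, which is the balance between generic and task-specific representations the abstract describes.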