Does Knowledge Distillation Really Work?
Here we briefly describe the key implementation details needed to reproduce our experiments. Data augmentation details are given in A.1, architecture details in A.2, and training details in A.3. The reader is encouraged to consult the included code for closer inspection.
A.1 Data augmentation procedures
Some of the data augmentation procedures we consider attempt to generate data close to the training data distribution (standard augmentations, GAN, mixup). Others (random noise, out-of-domain data) produce distillation data that the teacher would never encounter during normal supervised training.
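Of the near-distribution procedures mentioned above, mixup is easy to sketch. The following is a minimal, framework-agnostic illustration (the function name and signature are mine, not from the paper's code): two examples and their one-hot labels are blended with a Beta-sampled weight.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two examples (and their one-hot labels) with a Beta-sampled weight.

    Illustrative sketch of the mixup augmentation; alpha controls how
    concentrated the mixing weight is near 0 or 1.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Example: mix two toy "images" with one-hot labels.
x1, x2 = np.zeros((4, 4)), np.ones((4, 4))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(x1, y1, x2, y2)
```

For distillation, the mixed input `x` would be fed to the teacher to obtain soft targets, so the label mixing step is optional in that setting.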
PALATE: Peculiar Application of the Law of Total Expectation to Enhance the Evaluation of Deep Generative Models
Dziarmaga, Tadeusz, Kądziołka, Marcin, Kasymov, Artur, Mazur, Marcin
Deep generative models (DGMs) have caused a paradigm shift in the field of machine learning, yielding noteworthy advancements in domains such as image synthesis, natural language processing, and other related areas. However, a comprehensive evaluation of these models that accounts for the trichotomy between fidelity, diversity, and novelty in generated samples remains a formidable challenge. A recently introduced solution that has emerged as a promising approach in this regard is the Feature Likelihood Divergence (FLD), a method that offers a theoretically motivated practical tool, yet also exhibits some computational challenges. In this paper, we propose PALATE, a novel enhancement to the evaluation of DGMs that addresses limitations of existing metrics. Our approach is based on a peculiar application of the law of total expectation to random variables representing accessible real data. When combined with the MMD baseline metric and DINOv2 feature extractor, PALATE offers a holistic evaluation framework that matches or surpasses state-of-the-art solutions while providing superior computational efficiency and scalability to large-scale datasets. Through a series of experiments, we demonstrate the effectiveness of the PALATE enhancement, contributing a computationally efficient, holistic evaluation approach that advances the field of DGMs assessment, especially in detecting sample memorization and evaluating generalization capabilities.
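The abstract builds on an MMD baseline metric computed over extracted features. As context, here is a minimal sketch of an unbiased squared-MMD estimator with a Gaussian kernel; this is the generic baseline, not the PALATE enhancement itself, and it assumes feature extraction (e.g., DINOv2 embeddings) has already produced `(n, d)` arrays.

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X and Y
    using a Gaussian (RBF) kernel with bandwidth sigma."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Drop diagonal terms for the unbiased estimator.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 8))        # stand-in for real features
fake_close = rng.normal(0.0, 1.0, size=(200, 8))  # matches the real distribution
fake_far = rng.normal(3.0, 1.0, size=(200, 8))    # shifted distribution
```

A well-matched sample should score near zero, while a shifted one scores higher; in practice the kernel bandwidth and feature extractor dominate the metric's behavior.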
CaBRNet, an open-source library for developing and evaluating Case-Based Reasoning Models
Xu-Darme, Romain, Varasse, Aymeric, Grastien, Alban, Girard, Julien, Chihani, Zakaria
As a reflection of the social and ethical concerns related to the increasing use of AI-based systems in modern society, the field of explainable AI (XAI) has gained tremendous momentum in recent years. XAI mainly consists of two complementary avenues of research that aim to shed light on the inner workings of complex ML models. On the one hand, post-hoc explanation methods apply to existing models that have often been trained with the sole purpose of accomplishing a given task as efficiently as possible (e.g., accuracy in a classification task). On the other hand, self-explainable models are designed and trained to produce their own explanations along with their decisions. The appeal of self-explainable models resides in the fact that, rather than using an approximation (i.e., a post-hoc explanation method) to understand a complex model, it is better to directly enforce a simpler (and more understandable) decision-making process during the design and training of the ML model, provided that such a model exhibits an acceptable level of performance.
GitHub - pytorch/vision: Datasets, Transforms and Models specific to Computer Vision
The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision. We recommend Anaconda as the Python package management system. Please refer to pytorch.org for details on installing PyTorch (torch). The following are the corresponding torchvision versions and supported Python versions. We don't officially support building from source using pip, but if you do, you'll need to use the --no-build-isolation flag.
Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability
Montes, Diego, Peerapatanapokin, Pongpatapee, Schultz, Jeff, Gun, Chengjun, Jiang, Wenxin, Davis, James C.
Training deep neural networks (DNNs) takes significant time and resources. A common practice for expedited deployment is to use pre-trained deep neural networks (PTNNs), often from model zoos -- collections of PTNNs; yet the reliability of model zoos remains unexamined. In the absence of an industry standard for the implementation and performance of PTNNs, engineers cannot confidently incorporate them into production systems. As a first step, discovering potential discrepancies between PTNNs across model zoos would reveal a threat to model zoo reliability. Prior work has indicated variance in the accuracy of deep learning systems. However, broader measures of reliability for PTNNs from model zoos are unexplored. This work measures notable discrepancies in accuracy, latency, and architecture across 36 PTNNs from four model zoos. Among the top 10 discrepancies, we find differences of 1.23%-2.62% in accuracy and 9%-131% in latency. We also find mismatches in architecture for well-known DNN architectures (e.g., ResNet and AlexNet). Our findings call for future work on empirical validation, automated measurement tools, and implementation best practices.
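The latency-discrepancy methodology the abstract describes can be sketched with a simple timing helper. This is an illustrative stand-in (the helper name and the toy "models" are mine, not the paper's code); the key points are warmup runs to absorb one-time costs and a median over repeated runs to resist scheduling noise.

```python
import time

def median_latency_ms(fn, *args, warmup=3, runs=20):
    """Median wall-clock latency of fn(*args) in milliseconds.

    Warmup runs absorb one-time costs (caching, lazy initialization)
    before any timing sample is taken.
    """
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return samples[len(samples) // 2]

# Toy stand-ins for the same PTNN obtained from two different zoos.
impl_a = lambda: sum(range(1_000))
impl_b = lambda: sum(range(200_000))
gap_pct = 100.0 * (median_latency_ms(impl_b) - median_latency_ms(impl_a)) / median_latency_ms(impl_a)
```

With real PTNNs, `fn` would be the model's forward pass on a fixed input batch, and `gap_pct` corresponds to the percentage latency discrepancies the paper reports.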
Comparing Different Automatic Image Augmentation Methods in PyTorch
One of the best ways to reduce overfitting is to collect more (good-quality) data. However, collecting more data is not always feasible and can be very expensive. A related technique is data augmentation: generating new data records or features from existing data, expanding the dataset without collecting more. It helps improve model generalization by creating variations of the original input data, making it harder for the model to memorize irrelevant information from training examples or features. Data augmentation is most common for image and text data, but also exists for tabular data.
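The idea of creating label-preserving variations can be sketched without any framework. Below is a minimal hand-rolled example (a random horizontal flip plus a small random shift standing in for a random crop); in practice one would use a library pipeline such as `torchvision.transforms` rather than code like this.

```python
import numpy as np

def random_augment(img, rng):
    """Apply a random horizontal flip and a small random shift with zero
    padding (a crude stand-in for a random crop).

    Framework-agnostic sketch of label-preserving image augmentation.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]  # horizontal flip
    dy, dx = rng.integers(-2, 3, size=2)  # shift by up to 2 pixels
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    yd = slice(max(-dy, 0), h + min(-dy, 0))
    xd = slice(max(-dx, 0), w + min(-dx, 0))
    out[yd, xd] = img[ys, xs]
    return out

rng = np.random.default_rng(42)
img = np.arange(64, dtype=np.float32).reshape(8, 8)  # toy 8x8 "image"
aug = random_augment(img, rng)
```

Each call yields a different variant of the same underlying image, which is exactly what makes it harder for a model to memorize individual training examples.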