
Collaborating Authors

 Jain, Shubham


Inherent Challenges of Post-Hoc Membership Inference for Large Language Models

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are often trained on vast amounts of undisclosed data, motivating the development of post-hoc Membership Inference Attacks (MIAs) to gain insight into their training data composition. However, in this paper, we identify inherent challenges in post-hoc MIA evaluation due to potential distribution shifts between collected member and non-member datasets. Using a simple bag-of-words classifier, we demonstrate that datasets used in recent post-hoc MIAs suffer from significant distribution shifts, in some cases achieving near-perfect distinction between members and non-members. This implies that previously reported high MIA performance may be largely attributable to these shifts rather than model memorization. We confirm that randomized, controlled setups eliminate such shifts and thus enable the development and fair evaluation of new MIAs. However, we note that such randomized setups are rarely available for the latest LLMs, making post-hoc data collection still required to infer membership for real-world LLMs. As a potential solution, we propose a Regression Discontinuity Design (RDD) approach for post-hoc data collection, which substantially mitigates distribution shifts. Evaluating various MIA methods on this RDD setup yields performance barely above random guessing, in stark contrast to previously reported results. Overall, our findings highlight the challenges in accurately measuring LLM memorization and the need for careful experimental design in (post-hoc) membership inference tasks.
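
To make the bag-of-words check concrete, below is a minimal Python sketch (not the authors' released code) of the blind-classifier test the abstract describes: if a simple classifier separates the collected member and non-member texts without ever querying the LLM, the gap can only come from distribution shift.

# Minimal sketch of the distribution-shift check: a bag-of-words classifier
# that never queries the target LLM. Each input list needs at least n_folds
# documents per class.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def shift_auc(members, non_members, n_folds=5):
    """Cross-validated AUC of a bag-of-words classifier separating the two
    text lists. ~0.5 means no detectable shift; near 1.0 means the datasets
    alone leak membership, independent of any model memorization."""
    texts = list(members) + list(non_members)
    y = np.array([1] * len(members) + [0] * len(non_members))
    X = CountVectorizer(max_features=10_000).fit_transform(texts)
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=n_folds, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)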


A Lightweight Measure of Classification Difficulty from Application Dataset Characteristics

arXiv.org Artificial Intelligence

Despite accuracy and computation benchmarks being widely available to help choose among neural network models, these benchmarks are usually computed on datasets with many classes and do not give a precise idea of performance for applications with few (< 10) classes. The conventional procedure for predicting performance is to train and test repeatedly on the different models and dataset variations of interest, which is computationally expensive. We propose an efficient classification difficulty measure that is calculated from the number of classes and the intra- and inter-class similarity metrics of the dataset. After a single stage of training and testing per model family, relative performance for different datasets and models of the same family can be predicted by comparing difficulty measures, without further training and testing. We show how this measure can help a practitioner select a computationally efficient model for a small dataset 6 to 29x faster than through repeated training and testing. In an example industrial application, the measure identifies options to select a model 42% smaller than the baseline YOLOv5-nano model, and, if merging from 3 to 2 classes meets requirements, 85% smaller.
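
The paper defines the exact measure; the hedged Python sketch below only illustrates the ingredients the abstract names (number of classes plus intra- and inter-class similarity computed over feature embeddings), and the combination rule at the end is an illustrative assumption, not the paper's formula.

import numpy as np

def classification_difficulty(feats, labels):
    """Illustrative difficulty score for an (N, D) feature matrix and (N,)
    integer labels; assumes at least two classes. Higher = harder."""
    labels = np.asarray(labels)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

    # Intra-class similarity: cosine similarity of samples to their own centroid.
    intra = np.mean(np.concatenate(
        [feats[labels == c] @ centroids[i] for i, c in enumerate(classes)]))

    # Inter-class similarity: mean cosine similarity between distinct centroids.
    sim = centroids @ centroids.T
    k = len(classes)
    inter = (sim.sum() - np.trace(sim)) / (k * (k - 1))

    # Assumed combination: difficulty grows with the number of classes and
    # class overlap, and shrinks as classes become internally coherent.
    return np.log(k) * (1 + inter) / (1 + intra)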


Feudal Networks for Visual Navigation

arXiv.org Artificial Intelligence

Visual navigation follows the intuition that humans can navigate without detailed maps. A common approach is interactive exploration while building a topological graph with images at nodes that can be used for planning. Recent variations learn from passive videos and can navigate using complex social and semantic cues. However, these methods need a significant number of training videos, utilize large graphs, and the scenes are not truly unseen since odometry is utilized. We introduce a new approach to visual navigation using feudal learning, which employs a hierarchical structure consisting of a worker agent, a mid-level manager, and a high-level manager. Key to the feudal learning paradigm, agents at each level see a different aspect of the task and operate at different spatial and temporal scales. Two unique modules are developed in this framework. For the high-level manager, we learn a memory proxy map in a self-supervised manner to record prior observations in a learned latent space, avoiding the use of graphs and odometry. For the mid-level manager, we develop a waypoint network that outputs intermediate subgoals imitating human waypoint selection during local navigation. This waypoint network is pre-trained on a new, small set of teleoperation videos that we make publicly available, with training environments different from testing environments. The resulting feudal navigation network achieves near-SOTA performance while providing a novel no-RL, no-graph, no-odometry, no-metric-map approach to the image-goal navigation task.
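
As an architectural illustration only, here is a skeletal Python sketch of how the three levels could interleave at run time; the three networks are stand-in callables, and the update periods (high_period, mid_period) are invented for illustration rather than taken from the paper.

# Skeleton (not the authors' code) of the three-level feudal control loop:
# the high-level manager consults a learned memory proxy map, the mid-level
# manager emits waypoint subgoals, and the worker produces low-level actions.
class FeudalNavigator:
    def __init__(self, memory_proxy, waypoint_net, worker_policy,
                 high_period=32, mid_period=8):
        self.memory_proxy = memory_proxy      # (obs, goal_img) -> latent goal
        self.waypoint_net = waypoint_net      # (obs, goal_latent) -> waypoint
        self.worker_policy = worker_policy    # (obs, waypoint) -> action
        self.high_period, self.mid_period = high_period, mid_period
        self.goal_latent = None
        self.waypoint = None

    def act(self, obs, goal_img, t):
        # Each level operates at its own temporal scale: the managers update
        # only every high_period / mid_period environment steps.
        if t % self.high_period == 0:
            self.goal_latent = self.memory_proxy(obs, goal_img)
        if t % self.mid_period == 0:
            self.waypoint = self.waypoint_net(obs, self.goal_latent)
        return self.worker_policy(obs, self.waypoint)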


Has Your Pretrained Model Improved? A Multi-head Posterior Based Approach

arXiv.org Artificial Intelligence

The emergence of pre-trained models has significantly impacted fields ranging from Natural Language Processing (NLP) and Computer Vision to relational datasets. Traditionally, these models are assessed through fine-tuned downstream tasks. However, this raises the question of how to evaluate these models more efficiently and effectively. In this study, we explore a novel approach where we leverage the meta-features associated with each entity as a source of worldly knowledge and employ entity representations from the models. We propose using the consistency between these representations and the meta-features as a metric for evaluating pre-trained models. Our method's effectiveness is demonstrated across various domains, including models with relational datasets, large language models, and image models. Pre-training large models is becoming increasingly common in various machine learning applications, thanks to the growing amount of user-generated content. This is evident in areas such as Natural Language Processing (NLP) with models like GPT (Generative Pre-trained Transformer), and in the vision-language domain with models like CLIP. Typically, the effectiveness of these models is evaluated using downstream tasks. However, these evaluations can be relatively costly if all tasks need to be performed.
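
The paper's multi-head posterior construction is not reproduced here; as a hedged proxy for the core idea, the sketch below scores a pre-trained model by how well a simple linear probe can predict known meta-features from the model's entity embeddings.

# Hedged proxy (not the paper's exact method): consistency between entity
# embeddings and categorical meta-features, measured as probe accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def consistency_score(embeddings, meta_labels, n_folds=5):
    """Mean cross-validated accuracy of a linear probe mapping an (N, D)
    embedding matrix to an (N,) array of categorical meta-features.
    Higher = embeddings more consistent with the worldly knowledge."""
    probe = LogisticRegression(max_iter=1000)
    return float(np.mean(cross_val_score(probe, embeddings, meta_labels,
                                         cv=n_folds)))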


Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models

arXiv.org Artificial Intelligence

With large language models (LLMs) poised to become embedded in our daily lives, questions are starting to be raised about the dataset(s) they learned from. These questions range from potential bias or misinformation LLMs could retain from their training data to questions of copyright and fair use of human-generated text. However, even as these questions emerge, developers of recent state-of-the-art LLMs have become increasingly reluctant to disclose details of their training corpus. We here introduce the task of document-level membership inference for real-world LLMs, i.e., inferring whether the LLM has seen a given document during training or not. First, we propose a procedure for the development and evaluation of document-level membership inference for LLMs by leveraging commonly used data sources for training and the model release date. We then propose a practical, black-box method to predict document-level membership and instantiate it on OpenLLaMA-7B with both books and academic papers. We show our methodology to perform very well, reaching an impressive AUC of 0.856 for books and 0.678 for papers. We then show that our approach outperforms the sentence-level membership inference attacks used in the privacy literature on the document-level membership task. We finally evaluate whether smaller models might be less sensitive to document-level inference and show OpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach. Taken together, our results show that accurate document-level membership can be inferred for LLMs, increasing the transparency of technology poised to change our lives.
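
As a simplified sketch of the black-box setup described above (the paper's actual feature extraction and meta-classifier are richer), the snippet below aggregates per-token log-probabilities, obtained from a hypothetical token_logprobs query function, into a single document-level membership score.

# Simplified document-level membership sketch. `token_logprobs(text)` is an
# assumed black-box query returning per-token log-probabilities from the
# target LLM; it stands in for whatever scoring access is available.
import numpy as np

def doc_membership_score(document, token_logprobs, chunk_chars=2000):
    # Split the document into chunks the model can score, then aggregate
    # per-token log-probabilities into one document-level statistic.
    chunks = [document[i:i + chunk_chars]
              for i in range(0, len(document), chunk_chars)]
    lps = np.concatenate([np.asarray(token_logprobs(c)) for c in chunks])
    # Higher mean log-probability (lower perplexity) suggests the document
    # was seen in training; a threshold or downstream classifier turns this
    # score into a membership decision.
    return lps.mean()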


FATA-Trans: Field And Time-Aware Transformer for Sequential Tabular Data

arXiv.org Artificial Intelligence

Sequential tabular data is one of the most commonly used data types in real-world applications. Different from conventional tabular data, where rows in a table are independent, sequential tabular data contains rich contextual and sequential information, where some fields change dynamically over time and others are static. Existing transformer-based approaches analyzing sequential tabular data overlook the differences between dynamic and static fields by replicating and filling static fields into each transformer input, and ignore temporal information between rows, which leads to three major disadvantages: (1) computational overhead, (2) artificially simplified data for the masked language modeling pre-training task that may yield less meaningful representations, and (3) disregarding the temporal behavioral patterns implied by time intervals. In this work, we propose FATA-Trans, a model with two field transformers for modeling sequential tabular data, which processes static and dynamic field information separately. FATA-Trans is field- and time-aware for sequential tabular data. The field-type embedding in the method enables FATA-Trans to capture differences between static and dynamic fields. The time-aware position embedding exploits both order and time-interval information between rows, which helps the model detect underlying temporal behavior in a sequence. Our experiments on three benchmark datasets demonstrate that the learned representations from FATA-Trans consistently outperform state-of-the-art solutions in downstream tasks. We also present visualization studies to highlight the insights captured by the learned representations, enhancing our understanding of the underlying data. Our code is available at https://github.com/zdy93/FATA-Trans.
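
Below is a hedged PyTorch sketch of the two embeddings the abstract names; the exact FATA-Trans formulation lives in the linked repository and may differ in detail, so treat the module as an illustration of the idea rather than the paper's implementation.

# Illustrative field-type and time-aware position embeddings (assumption:
# order embedding + a linear projection of inter-row time gaps).
import torch
import torch.nn as nn

class FieldTimeEmbedding(nn.Module):
    def __init__(self, d_model, max_len=512, num_field_types=2):
        super().__init__()
        # Field-type embedding: lets attention distinguish static fields
        # (e.g., account type) from dynamic ones (e.g., transaction amount).
        self.field_type = nn.Embedding(num_field_types, d_model)
        # Order-based position embedding plus a projection of the time
        # interval between consecutive rows ("time-aware" position).
        self.position = nn.Embedding(max_len, d_model)
        self.interval = nn.Linear(1, d_model)

    def forward(self, field_emb, field_type_ids, timestamps):
        # field_emb: (B, L, d); field_type_ids: (B, L) ints in {0: static,
        # 1: dynamic}; timestamps: (B, L) floats (e.g., seconds).
        pos = torch.arange(field_emb.size(1), device=field_emb.device)
        gaps = timestamps - timestamps.roll(1, dims=1)
        gaps[:, 0] = 0.0  # no predecessor for the first row
        return (field_emb + self.field_type(field_type_ids)
                + self.position(pos) + self.interval(gaps.unsqueeze(-1)))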


PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs

arXiv.org Artificial Intelligence

Pre-training on large models is prevalent and emerging with the ever-growing user-generated content in many machine learning application categories. It has been recognized that learning contextual knowledge from the datasets depicting user-content interaction plays a vital role in downstream tasks. Despite several studies attempting to learn contextual knowledge via pre-training methods, finding an optimal training objective and strategy for this type of task remains a challenging problem. In this work, we contend that there are two distinct aspects of contextual knowledge, namely the user-side and the content-side, for datasets where user-content interactions exist.

Fundamentally, a common goal of data mining applications using user-content interactions is to understand user's behaviors [17] and content's properties. Researchers attempt multiple ways to model such behaviors: the time-related nature of the interactions is a fit for sequential models, such as recurrent neural networks (RNN), and the interactions and relations can be modeled as graph neural networks (GNN). Conventionally, the training objective is to minimize the loss of a specific task such that one model is tailored to a particular application (e.g., recommendation). This approach is simple and effective for every data mining application.
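
The excerpt above does not spell out the training objective, so the following Python sketch is speculative: one plausible dual-encoder setup that aligns user-side and content-side representations of observed interactions with an in-batch contrastive loss. The actual PDT objectives and architecture may differ.

# Speculative dual-encoder sketch; both encoders are stand-in modules that
# map interaction sequences to (B, d) embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, user_encoder, content_encoder, temperature=0.07):
        super().__init__()
        self.user_encoder = user_encoder        # user-side sequence -> (B, d)
        self.content_encoder = content_encoder  # content-side sequence -> (B, d)
        self.temperature = temperature

    def contrastive_loss(self, user_batch, content_batch):
        # In-batch negatives: the i-th user and i-th content are assumed to
        # have interacted; all other pairings serve as negatives.
        u = F.normalize(self.user_encoder(user_batch), dim=-1)
        c = F.normalize(self.content_encoder(content_batch), dim=-1)
        logits = u @ c.T / self.temperature
        targets = torch.arange(u.size(0), device=u.device)
        return F.cross_entropy(logits, targets)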


ADOPT: A system for Alerting Drivers to Occluded Pedestrian Traffic

arXiv.org Artificial Intelligence

Recent statistics reveal an alarming increase in accidents involving pedestrians (especially children) crossing the street. A common philosophy of existing pedestrian detection approaches is that this task should be undertaken by the moving cars themselves. In sharp departure from this philosophy, we propose to enlist the help of cars parked along the sidewalk to detect and protect crossing pedestrians. In support of this goal, we propose ADOPT: a system for Alerting Drivers to Occluded Pedestrian Traffic. ADOPT lays the theoretical foundations of a system that uses parked cars to: (1) detect the presence of a group of crossing pedestrians - a crossing cohort; (2) predict the time the last member of the cohort takes to clear the street; (3) send alert messages to those approaching cars that may reach the crossing area while pedestrians are still in the street; and, (4) show how approaching cars can adjust their speed, given several simultaneous crossing locations. Importantly, in ADOPT all communications occur over very short distances and at very low power. Our extensive simulations using SUMO-generated pedestrian and car traffic have shown the effectiveness of ADOPT in detecting and protecting crossing pedestrians.
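
As a back-of-the-envelope illustration (not the paper's exact protocol) of the speed-adjustment step, the snippet below computes the largest speed at which an alerted car, a known distance from the crossing, cannot arrive before the pedestrian cohort has cleared the street.

def max_safe_speed(d_meters, t_clear_s, v_current_mps):
    """Largest constant speed at which a car d_meters from the crossing
    arrives only after the cohort clears in t_clear_s seconds. Returns the
    current speed if no slowdown is needed."""
    if t_clear_s <= 0:
        return v_current_mps
    v_safe = d_meters / t_clear_s
    return min(v_current_mps, v_safe)

# Example: 80 m away at 14 m/s (~50 km/h) means arriving in ~5.7 s, before
# the cohort clears in 8 s, so the car should slow to at most 10 m/s.
print(max_safe_speed(80, 8, 14))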