Collaborating Authors: Xu, Shengzhe


Optimizing Product Provenance Verification using Data Valuation Methods

arXiv.org Artificial Intelligence

Determining and verifying product provenance remains a critical challenge in global supply chains, particularly as geopolitical conflicts and shifting borders create new incentives for misrepresentation of commodities, such as hiding the origin of illegally harvested timber or agriculture grown on illegally cleared land. Stable Isotope Ratio Analysis (SIRA), combined with Gaussian process regression-based isoscapes, has emerged as a powerful tool for geographic origin verification. However, the effectiveness of these models is often constrained by data scarcity and suboptimal dataset selection. In this work, we introduce a novel data valuation framework designed to enhance the selection and utilization of training data for machine learning models applied in SIRA. By prioritizing high-informative samples, our approach improves model robustness and predictive accuracy.

Product identification and provenance verification of traded natural resources have emerged as promising research areas, with various combinations of methods used based on the specific natural resource sector and the level of granularity of species identification and origin-provenance determination. For example, for wood and forest products, determining species identification and geographic harvest provenance requires utilizing multiple testing methods and tools [5, 8, 20].
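As a rough illustration of the pipeline this abstract describes, the sketch below fits a Gaussian process regression "isoscape" on toy coordinate-to-isotope data and scores each training sample with a simple leave-one-out valuation. The coordinates, isotope values, RBF kernel choice, and valuation rule are all illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

# Hypothetical toy data: (latitude, longitude) -> isotope ratio.
# Values are illustrative only, not real SIRA measurements.
X = np.array([[10.0, 20.0], [12.0, 21.0], [30.0, 45.0], [32.0, 44.0], [50.0, 5.0]])
y = np.array([-4.1, -4.3, -7.8, -8.0, -11.2])

def rbf_kernel(A, B, length_scale=5.0):
    """Squared-exponential covariance between coordinate sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Gaussian process regression posterior mean at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

def loo_values(X, y):
    """Leave-one-out data values: prediction error incurred when a sample is held out."""
    values = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        pred = gp_predict(X[keep], y[keep], X[i:i + 1])
        values.append(abs(pred[0] - y[i]))  # high error => informative sample
    return np.array(values)

vals = loo_values(X, y)
```

In this toy setting the isolated sample at (50, 5) receives the highest value: no other point can substitute for it, which is exactly the kind of high-informative sample a valuation-driven selection would prioritize.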


LLM Augmentations to support Analytical Reasoning over Multiple Documents

arXiv.org Artificial Intelligence

Building on their demonstrated ability to perform a variety of tasks, we investigate the application of large language models (LLMs) to enhance in-depth analytical reasoning within the context of intelligence analysis. Intelligence analysts typically work with massive dossiers to draw connections between seemingly unrelated entities and uncover adversaries' plans and motives. We explore whether and how LLMs can help analysts with this task, and develop an architecture that augments an LLM with a memory module called dynamic evidence trees (DETs) to build and track multiple investigation threads. Through extensive experiments on multiple datasets, we highlight how LLMs, as-is, are still inadequate to support intelligence analysts, and we offer recommendations for improving LLMs for such intricate reasoning applications.
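One way such a memory module could be organized is as a tree of evidence nodes whose root-to-leaf paths form investigation threads. The sketch below is a hypothetical rendering of that idea; the class names, fields, and traversal are assumptions, since the paper's DET implementation is not reproduced here.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EvidenceNode:
    """Hypothetical node in a dynamic evidence tree (DET)."""
    claim: str                      # hypothesis or evidence snippet
    source_doc: str | None = None   # document the evidence came from
    children: list[EvidenceNode] = field(default_factory=list)

    def add(self, claim: str, source_doc: str | None = None) -> EvidenceNode:
        node = EvidenceNode(claim, source_doc)
        self.children.append(node)
        return node

    def threads(self, prefix=()):
        """Enumerate root-to-leaf investigation threads, e.g. for LLM prompting."""
        path = prefix + (self.claim,)
        if not self.children:
            yield path
        for child in self.children:
            yield from child.threads(path)

# Toy investigation: two threads branching from one driving question.
root = EvidenceNode("Who coordinated the shipments?")
a = root.add("Entity A wired funds to Entity B", source_doc="doc_014")
a.add("Entity B leased a warehouse near the port", source_doc="doc_031")
root.add("Entity C appears in the same flight manifests", source_doc="doc_002")
```

Enumerating `root.threads()` here yields two threads, each a chain of claims that could be serialized into a prompt so the model develops one line of investigation at a time.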


Information Guided Regularization for Fine-tuning Language Models

arXiv.org Artificial Intelligence

The pretraining-fine-tuning paradigm has been the de facto strategy for transfer learning in modern language modeling. With the understanding that task adaptation in LMs is often a function of parameters shared across tasks, we argue that a more surgical approach to regularization is needed for smoother transfer learning. To this end, we investigate how the pretraining loss landscape is affected by these task-sensitive parameters through an information-theoretic lens. We then leverage the findings from our investigation to devise a novel approach to dropout for improved model regularization and better downstream generalization. This approach, named guided dropout, is both task- and architecture-agnostic and adds no computational overhead to the fine-tuning process. Through empirical evaluations, we show that our approach to regularization yields consistently better performance than standard baselines, even in scenarios of data paucity.
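One plausible reading of importance-guided dropout is to bias per-unit drop probabilities by an information score, so that task-sensitive units are dropped less often. The sketch below illustrates that reading only; the scoring, scaling, and names are assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_dropout_mask(importance, base_rate=0.5):
    """Per-unit keep mask whose drop probability shrinks with importance.

    importance : nonnegative scores per unit (e.g. Fisher-information
    estimates); the most important unit is never dropped.
    """
    norm = importance / importance.max()
    drop_prob = base_rate * (1.0 - norm)  # important units -> low drop prob
    return (rng.random(importance.shape) >= drop_prob).astype(float)

# Toy hidden activations and an invented Fisher-like score per unit.
h = rng.standard_normal(8)
scores = np.array([0.1, 0.9, 0.05, 0.7, 0.3, 0.02, 0.8, 0.4])

mask = guided_dropout_mask(scores)
h_dropped = h * mask / mask.mean()  # inverted-dropout style rescaling
```

The design choice this illustrates: standard dropout treats all units as exchangeable, whereas an information-guided mask concentrates regularization pressure on units the task is least sensitive to.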


Are LLMs Naturally Good at Synthetic Tabular Data Generation?

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated their prowess in generating synthetic text and images; however, their potential for generating tabular data -- arguably the most common data type in business and scientific applications -- is largely underexplored. This paper demonstrates that LLMs, used as-is or after traditional fine-tuning, are severely inadequate as synthetic table generators. Due to the autoregressive nature of LLMs, fine-tuning with randomly permuted column orders runs counter to the importance of modeling functional dependencies, and renders LLMs unable to model conditional mixtures of distributions (key to capturing real-world constraints). We showcase how LLMs can be made to overcome some of these deficiencies by making them permutation-aware.
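The permutation issue can be made concrete with a small serialization example: if one column functionally determines another, an autoregressive model should see the determinant first, which random column permutations routinely violate. The column names, dependencies, and serialization format below are invented for illustration, not taken from the paper.

```python
import random
from graphlib import TopologicalSorter

# Toy row with two functional dependencies: city -> zip, age -> income.
row = {"city": "Blacksburg", "zip": "24060", "age": "34", "income": "58k"}
fds = {"zip": {"city"}, "income": {"age"}}  # dependent -> its determinants

def serialize(row, order):
    """Render one table row as text in the given column order."""
    return ", ".join(f"{col} is {row[col]}" for col in order)

# Random permutation: what naive fine-tuning effectively trains on.
cols = list(row)
random.shuffle(cols)
shuffled_text = serialize(row, cols)

# Dependency-aware order: a topological sort puts determinants first,
# so the model conditions "zip" on "city" and "income" on "age".
ts = TopologicalSorter({c: fds.get(c, set()) for c in row})
aware_order = list(ts.static_order())
aware_text = serialize(row, aware_order)
```

Under the shuffled order the model may have to emit "zip" before ever seeing "city", forcing it to marginalize over the dependency instead of conditioning on it.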


Large Multi-Modal Models (LMMs) as Universal Foundation Models for AI-Native Wireless Systems

arXiv.org Artificial Intelligence

Large language models (LLMs) and foundation models have recently been touted as a game-changer for 6G systems. However, recent efforts on LLMs for wireless networks are limited to direct applications of existing language models that were designed for natural language processing (NLP). To address this challenge and create wireless-centric foundation models, this paper presents a comprehensive vision of how to design universal foundation models tailored towards the deployment of artificial intelligence (AI)-native networks. Diverging from NLP-based foundation models, the proposed framework promotes the design of large multi-modal models (LMMs) fostered by three key capabilities: 1) processing of multi-modal sensing data, 2) grounding of physical symbol representations in real-world wireless systems using causal reasoning and retrieval-augmented generation (RAG), and 3) instructibility from wireless-environment feedback, which facilitates dynamic network adaptation through the logical and mathematical reasoning of neuro-symbolic AI. In essence, these properties enable the proposed LMM framework to build universal capabilities that cater to various cross-layer networking tasks and to align intents across different domains. Preliminary experimental results demonstrate the efficacy of RAG-based grounding in LMMs and showcase the alignment of LMMs with wireless system designs. Furthermore, the enhanced rationale exhibited by LMMs in responses to mathematical questions, compared to vanilla LLMs, demonstrates their logical and mathematical reasoning capabilities. Building on these results, we present a series of open questions and challenges for LMMs, and conclude with a set of recommendations that chart the path towards LMM-empowered AI-native systems.
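The RAG-based grounding step the abstract evaluates can be sketched at its simplest: retrieve the wireless-domain snippet most similar to a query and prepend it to the prompt so the model answers from grounded context. The corpus, bag-of-words scoring, and prompt format below are toy assumptions, not the paper's system.

```python
from collections import Counter
import math

# Invented wireless-domain corpus standing in for retrieved documentation.
corpus = [
    "OFDM divides the channel into orthogonal subcarriers to combat fading.",
    "Beamforming steers antenna array energy toward a target user.",
    "HARQ combines retransmissions to improve link reliability.",
]

def bow(text):
    """Bag-of-words term counts for a crude similarity measure."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    q = bow(query)
    return max(docs, key=lambda d: cosine(q, bow(d)))

query = "how does beamforming help antenna arrays serve a user?"
context = retrieve(query, corpus)
prompt = f"Context: {context}\nQuestion: {query}"
```

A deployed system would replace the bag-of-words scoring with learned embeddings over wireless specifications, but the grounding mechanism -- retrieve, then condition the model on what was retrieved -- is the same.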