vera
One of the Best Pop Horror Books of the Year Is by … This Guy?
If you've heard of Chuck Tingle, it's likely due to the wild titles of his extremely niche, parodic, self-published gay erotica, many of which have gone viral since Tingle became involved in the culture wars within the science-fiction community in the 2010s. And if that's so, you may be surprised to learn that Tingle has, over the past few years, evolved into one of the better horror/technothriller authors around. The mainstream publishing success of the author of This Handsome Sentient Baseball Hits a Home Run Into My Butt is one of the most unlikely, inspiring, and downright sweet underdog stories in book publishing, an industry not known for its abundance of sweet underdogs. Tingle's first taste of mass-culture fame came in 2016, when one of his short stories was nominated for a Hugo Award. The Hugos, presented every year at the World Science Fiction Convention (WorldCon), are among the genre's most prestigious prizes.
- North America > United States > Montana (0.14)
- Asia > Middle East > Syria > Damascus Governorate > Damascus (0.07)
- North America > United States > Utah (0.04)
- (3 more...)
- Leisure & Entertainment > Sports > Baseball (0.54)
- Health & Medicine > Therapeutic Area (0.47)
Federal judge restricts LAPD from targeting journalists with force at immigration protests
A 'Fox News @ Night' panel gives its closing thoughts after the fourth night of anti-ICE protests in Los Angeles. A Los Angeles-based federal judge appointed by former President Joe Biden recently issued a temporary restraining order restricting the Los Angeles Police Department (LAPD) from using less-lethal munitions (LLMs) on journalists covering immigration protests. The order, signed by Judge Hernan Vera on Thursday, also prevents the LAPD from detaining or restricting the movements of journalists. Vera cited at least 35 "troubling" incidents between June 6 and 19 in which police allegedly exposed journalists to LLMs, tear gas and other physical force to block them from covering conflict zones. Los Angeles Police Department (LAPD) officers move in on demonstrators in front of LA City Hall during a protest against federal immigration sweeps in downtown Los Angeles, California, on June 8, 2025.
- North America > United States > California > Los Angeles County > Los Angeles (1.00)
- Oceania > Australia (0.06)
- Media > News (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Government > Regional Government (1.00)
VERA: Variational Inference Framework for Jailbreaking Large Language Models
Lochab, Anamika, Yan, Lu, Pynadath, Patrick, Zhang, Xiangyu, Zhang, Ruqi
The rise of API-only access to state-of-the-art LLMs highlights the need for effective black-box jailbreak methods to identify model vulnerabilities in real-world settings. Without a principled objective for gradient-based optimization, most existing approaches rely on genetic algorithms, which are limited by their initialization and dependence on manually curated prompt pools. Furthermore, these methods require individual optimization for each prompt, failing to provide a comprehensive characterization of model vulnerabilities. To address this gap, we introduce VERA: Variational infErence fRamework for jAilbreaking. VERA casts black-box jailbreak prompting as a variational inference problem, training a small attacker LLM to approximate the target LLM's posterior over adversarial prompts. Once trained, the attacker can generate diverse, fluent jailbreak prompts for a target query without re-optimization. Experimental results show that VERA achieves strong performance across a range of target LLMs, highlighting the value of probabilistic inference for adversarial prompt generation.
- Europe > Monaco (0.04)
- Asia > Middle East > Jordan (0.04)
- Government (1.00)
- Information Technology > Security & Privacy (0.93)
- Transportation (0.89)
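The variational framing above can be illustrated with a toy sketch. This is not the paper's objective or scale: here the "attacker" is just a categorical distribution over three made-up prompt styles, nudged by the expected score-function gradient toward whichever style a stubbed black-box reward scores highest; in VERA the attacker is itself an LLM.

```python
import math

# Toy illustration of the variational idea behind VERA (not the paper's
# exact objective): an attacker distribution q over a tiny pool of
# candidate prompt styles is pushed, via the expected score-function
# (REINFORCE) gradient, toward styles the black-box target rewards.
# The prompt pool and reward values below are stand-ins.

prompts = ["benign rephrase", "roleplay framing", "direct request"]
reward = {"benign rephrase": 0.2, "roleplay framing": 0.9, "direct request": 0.1}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

logits = [0.0, 0.0, 0.0]   # attacker parameters, one logit per prompt style
lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    avg_r = sum(p * reward[t] for p, t in zip(probs, prompts))
    # Expected REINFORCE gradient of E_q[reward]: probs[j] * (r_j - avg_r).
    for j, t in enumerate(prompts):
        logits[j] += lr * probs[j] * (reward[t] - avg_r)

probs = softmax(logits)
best = prompts[probs.index(max(probs))]
```

Once trained, sampling from q yields diverse high-reward prompts without per-query re-optimization, which is the property the abstract emphasizes.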
EigenLoRAx: Recycling Adapters to Find Principal Subspaces for Resource-Efficient Adaptation and Inference
Kaushik, Prakhar, Vaidya, Ankit, Chaudhari, Shravan, Yuille, Alan
The rapid growth of large models has raised concerns about their environmental impact and equity in accessibility due to significant computational costs. Low-Rank Adapters (LoRA) offer a lightweight solution for finetuning large models, resulting in an abundance of publicly available adapters tailored to diverse domains. We ask: can these pretrained adapters be leveraged to further streamline adaptation to new tasks while addressing these challenges? We introduce EigenLoRAx, a parameter-efficient finetuning method that recycles existing adapters to create a principal subspace aligned with their shared domain knowledge, which can be further augmented with orthogonal basis vectors in low-resource scenarios. This enables rapid adaptation to new tasks by learning only lightweight coefficients on the principal components of the subspace, eliminating the need to finetune entire adapters. EigenLoRAx requires significantly fewer parameters and less memory, improving efficiency for both training and inference. Our method demonstrates strong performance across diverse domains and tasks, offering a scalable solution for edge-based applications, personalization, and equitable deployment of large models in resource-constrained environments.
- North America > United States (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Switzerland (0.04)
- (3 more...)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
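The "learn only coefficients on a shared subspace" idea can be sketched in miniature. Here the recycled "adapters" are flat four-dimensional vectors and the basis comes from Gram-Schmidt (the paper derives principal components); all numbers are illustrative.

```python
import math

# Toy sketch of the EigenLoRAx idea: existing adapters span a shared
# subspace, and a new task is fit by learning only coefficients on that
# subspace instead of a full adapter. "Adapters" here are flat vectors;
# the basis is built with Gram-Schmidt for simplicity, whereas the paper
# uses principal components of the recycled adapters.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return [x * s for x in a]
def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(a): return math.sqrt(dot(a, a))

adapter_1 = [1.0, 0.0, 1.0, 0.0]
adapter_2 = [0.0, 1.0, 0.0, 1.0]

# Orthonormal basis for the span of the recycled adapters.
b1 = scale(adapter_1, 1.0 / norm(adapter_1))
r = sub(adapter_2, scale(b1, dot(adapter_2, b1)))
b2 = scale(r, 1.0 / norm(r))
basis = [b1, b2]

# A "new task" adapter that happens to lie in the shared subspace.
new_task = [2.0, -1.0, 2.0, -1.0]

# Learning only the coefficients amounts to projecting onto the basis:
# 2 trainable numbers instead of 4 (or millions, at real scale).
coeffs = [dot(new_task, b) for b in basis]
recon = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(4)]
err = norm(sub(new_task, recon))
```

When the new task lies in (or near) the shared subspace, the reconstruction error is (near) zero, which is why so few trainable parameters suffice.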
VERA: Explainable Video Anomaly Detection via Verbalized Learning of Vision-Language Models
Ye, Muchao, Liu, Weiyang, He, Pan
The rapid advancement of vision-language models (VLMs) has established a new paradigm in video anomaly detection (VAD): leveraging VLMs to simultaneously detect anomalies and provide comprehensible explanations for the decisions. Existing work in this direction often assumes the complex reasoning required for VAD exceeds the capabilities of pretrained VLMs. Consequently, these approaches either incorporate specialized reasoning modules during inference or rely on instruction tuning datasets through additional training to adapt VLMs for VAD. However, such strategies often incur substantial computational costs or data annotation overhead. To address these challenges in explainable VAD, we introduce a verbalized learning framework named VERA that enables VLMs to perform VAD without model parameter modifications. Specifically, VERA automatically decomposes the complex reasoning required for VAD into reflections on simpler, more focused guiding questions capturing distinct abnormal patterns. It treats these reflective questions as learnable parameters and optimizes them through data-driven verbal interactions between learner and optimizer VLMs, using coarsely labeled training data. During inference, VERA embeds the learned questions into model prompts to guide VLMs in generating segment-level anomaly scores, which are then refined into frame-level scores via the fusion of scene and temporal contexts. Experimental results on challenging benchmarks demonstrate that the learned questions of VERA are highly adaptable, significantly improving both detection performance and explainability of VLMs for VAD.
- North America > United States > Iowa (0.04)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- Asia > India (0.04)
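The final refinement step, turning segment-level scores into frame-level scores, can be sketched simply. The paper fuses scene and temporal context; this toy shows only the temporal part, with made-up scores: frames inherit their segment's score and are then smoothed over time.

```python
# Toy sketch of refining segment-level anomaly scores into frame-level
# scores, as in VERA's last stage. Only a simple temporal smoothing is
# shown here (the paper also fuses scene context); scores are made up.

segment_scores = [0.1, 0.1, 0.9, 0.2]   # one VLM-produced score per segment
frames_per_segment = 4

# Step 1: each frame starts from its segment's score.
frame_scores = [s for s in segment_scores for _ in range(frames_per_segment)]

# Step 2: a centered moving average so scores vary gradually across
# segment boundaries instead of jumping.
def moving_average(xs, k=3):
    half = k // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

smoothed = moving_average(frame_scores)
```

The anomalous third segment still dominates after smoothing, but frames near its boundaries get intermediate scores, which is the intuition behind temporal fusion.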
VERA: Validation and Enhancement for Retrieval Augmented systems
Birur, Nitin Aravind, Baswa, Tanay, Kumar, Divyanshu, Loya, Jatan, Agarwal, Sahil, Harshangi, Prashanth
Large language models (LLMs) exhibit remarkable capabilities but often produce inaccurate responses, as they rely solely on their embedded knowledge. Retrieval-Augmented Generation (RAG) enhances LLMs by incorporating an external information retrieval system, supplying additional context along with the query to mitigate inaccuracies for a particular context. However, accuracy issues still remain, as the model may rely on irrelevant documents or extrapolate incorrectly from its training knowledge. To assess and improve the performance of both the retrieval system and the LLM in a RAG framework, we propose VERA (Validation and Enhancement for Retrieval Augmented systems), a system designed to: 1) evaluate and enhance the retrieved context before response generation, and 2) evaluate and refine the LLM-generated response to ensure precision and minimize errors. VERA employs an evaluator-cum-enhancer LLM that first checks whether external retrieval is necessary, evaluates the relevance and redundancy of the retrieved context, and refines it to eliminate non-essential information. After response generation, VERA splits the response into atomic statements, assesses their relevance to the query, and ensures adherence to the context. Our experiments demonstrate VERA's efficacy in improving the performance not only of smaller open-source models but also of larger state-of-the-art models. These enhancements underscore VERA's potential to produce accurate and relevant responses, advancing the state of the art in retrieval-augmented language modeling. VERA's robust methodology, combining multiple evaluation and refinement steps, effectively mitigates hallucinations and improves retrieval and response processes, making it a valuable tool for applications demanding high accuracy and reliability in information generation.
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Switzerland (0.04)
- Europe > Monaco (0.04)
- Research Report > New Finding (0.46)
- Research Report > Promising Solution (0.34)
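The control flow the abstract describes can be sketched end to end. In this hedged sketch the evaluator-cum-enhancer LLM is replaced by deterministic keyword-overlap stubs (`is_relevant`, `supported_by`) so the pipeline is runnable; these stubs and every name below are illustrative, not the paper's implementation.

```python
# Hedged sketch of the VERA pipeline: filter retrieved context, generate,
# split the response into atomic statements, and keep only statements
# grounded in the context. LLM judgments are mocked with keyword overlap.

def overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa))

def is_relevant(query: str, doc: str, threshold: float = 0.3) -> bool:
    return overlap(query, doc) >= threshold

def supported_by(statement: str, context: str, threshold: float = 0.5) -> bool:
    return overlap(statement, context) >= threshold

def vera_pipeline(query, retrieved_docs, generate):
    # 1) Evaluate and enhance the retrieved context: drop irrelevant docs.
    context = [d for d in retrieved_docs if is_relevant(query, d)]
    # 2) Generate a response from the refined context.
    response = generate(query, context)
    # 3) Split into atomic statements; keep only those the context supports.
    statements = [s.strip() for s in response.split(".") if s.strip()]
    ctx_text = " ".join(context)
    kept = [s for s in statements if supported_by(s, ctx_text)]
    return ". ".join(kept)

docs = ["the eiffel tower is in paris", "bananas are yellow"]
answer = vera_pipeline(
    "where is the eiffel tower",
    docs,
    lambda q, ctx: "the eiffel tower is in paris. the moon is made of cheese",
)
```

The unsupported statement is dropped in step 3, which is the mechanism the abstract credits for mitigating hallucinations.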
Beyond LoRA: Exploring Efficient Fine-Tuning Techniques for Time Series Foundational Models
Gupta, Divij, Bhatti, Anubhav, Parmar, Surajsinh
Time Series Foundation Models (TSFMs) have recently garnered attention for their ability to model complex, large-scale time series data across domains such as retail, finance, and transportation. However, their application to sensitive, domain-specific fields like healthcare remains challenging, primarily due to the difficulty of fine-tuning these models for specialized, out-of-domain tasks with scarce publicly available datasets. In this work, we explore the use of Parameter-Efficient Fine-Tuning (PEFT) techniques to address these limitations, focusing on healthcare applications, particularly ICU vitals forecasting for sepsis patients. We introduce and evaluate two selective (BitFit and LayerNorm Tuning) and two additive (VeRA and FourierFT) PEFT techniques on multiple configurations of the Chronos TSFM for forecasting vital signs of sepsis patients. Our comparative analysis demonstrates that some of these PEFT methods outperform LoRA in terms of parameter efficiency and domain adaptation, establishing state-of-the-art (SOTA) results in ICU vital forecasting tasks. Interestingly, FourierFT applied to the Chronos (Tiny) variant surpasses the SOTA model while fine-tuning only 2,400 parameters compared to the 700K parameters of the benchmark.
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Vital Signs (0.35)
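BitFit, one of the selective PEFT methods evaluated above, reduces to a simple selection rule: train only bias terms and freeze everything else. A toy illustration with a mocked name-to-parameter-count mapping (illustrative sizes, not Chronos itself) shows why the trainable fraction ends up so small.

```python
# Toy illustration of BitFit's selection rule: only bias parameters are
# trainable, all weights are frozen. The "model" is a mocked mapping from
# parameter name to parameter count; the sizes are illustrative.

model_params = {
    "encoder.layer0.attn.weight": 768 * 768,
    "encoder.layer0.attn.bias": 768,
    "encoder.layer0.ffn.weight": 768 * 3072,
    "encoder.layer0.ffn.bias": 3072,
    "head.weight": 768 * 10,
    "head.bias": 10,
}

# The entire method is this predicate: name ends with ".bias".
trainable = {n: c for n, c in model_params.items() if n.endswith(".bias")}

total = sum(model_params.values())
tuned = sum(trainable.values())
fraction = tuned / total   # well under 1% of all parameters
```

The same spirit explains the abstract's headline number: FourierFT tunes 2,400 parameters against a 700K-parameter benchmark, a fraction of roughly 0.3%.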
VERA: Validation and Evaluation of Retrieval-Augmented Systems
Ding, Tianyu, Banerjee, Adi, Mombaerts, Laurent, Li, Yunhong, Borogovac, Tarik, Weinstein, Juan Pablo De la Cruz
The increasing use of Retrieval-Augmented Generation (RAG) systems in various applications necessitates stringent protocols to ensure RAG systems' accuracy, safety, and alignment with user intentions. In this paper, we introduce VERA (Validation and Evaluation of Retrieval-Augmented Systems), a framework designed to enhance the transparency and reliability of outputs from large language models (LLMs) that utilize retrieved information. VERA improves the way we evaluate RAG systems in two important ways: (1) it introduces a cross-encoder based mechanism that combines a set of multidimensional metrics into a single comprehensive ranking score, addressing the challenge of prioritizing individual metrics, and (2) it employs bootstrap statistics on LLM-based metrics across the document repository to establish confidence bounds, ensuring the repository's topical coverage and improving the overall reliability of retrieval systems. Through several use cases, we demonstrate how VERA can strengthen decision-making processes and trust in AI applications. Our findings not only contribute to the theoretical understanding of LLM-based RAG evaluation metrics but also promote the practical implementation of responsible AI systems, marking a significant advancement in the development of reliable and transparent generative AI technologies.
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.05)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Personal > Honors (1.00)
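The bootstrap step described in point (2) is the generic percentile bootstrap: resample per-document metric scores with replacement and take a percentile interval as a confidence bound on the repository-level mean. The scores below are made up; this is the textbook procedure, not the framework's own code.

```python
import random

# Sketch of bootstrap confidence bounds on a repository-level mean of
# per-document LLM-metric scores. Scores are invented for illustration.

random.seed(0)
scores = [0.7, 0.8, 0.9, 0.6, 0.75, 0.85, 0.8, 0.7, 0.9, 0.65]

def bootstrap_ci(xs, n_resamples=2000, alpha=0.05):
    means = []
    for _ in range(n_resamples):
        # Resample the documents with replacement and record the mean.
        sample = [random.choice(xs) for _ in xs]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(scores)
mean = sum(scores) / len(scores)
```

A narrow interval signals that the repository's topical coverage is consistently scored; a wide one flags metrics whose repository-level estimate is unreliable.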
Combining Cognitive and Generative AI for Self-explanation in Interactive AI Agents
Sushri, Shalini, Dass, Rahul, Basappa, Rhea, Lu, Hong, Goel, Ashok
The Virtual Experimental Research Assistant (VERA) is an inquiry-based learning environment that empowers a learner to build conceptual models of complex ecological systems and experiment with agent-based simulations of the models. This study investigates the convergence of cognitive AI and generative AI for self-explanation in interactive AI agents such as VERA. From a cognitive AI viewpoint, we endow VERA with a functional model of its own design, knowledge, and reasoning represented in the Task-Method-Knowledge (TMK) language. From the perspective of generative AI, we use ChatGPT, LangChain, and Chain-of-Thought to answer user questions based on the VERA TMK model. Thus, we combine cognitive and generative AI to generate explanations about how VERA works and produces its answers. The preliminary evaluation of the generation of explanations in VERA on a bank of 66 questions derived from earlier work appears promising.
- North America > United States (0.14)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (4 more...)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.34)
- Education > Educational Setting > Online (0.68)
- Education > Educational Technology > Educational Software > Computer Based Training (0.46)
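The core move of grounding an LLM's self-explanation in a TMK model can be sketched as prompt assembly: the functional model is placed in the prompt so answers refer to the system's own design. The miniature TMK content and prompt wording below are illustrative; the study itself uses ChatGPT, LangChain, and Chain-of-Thought prompting.

```python
# Hedged sketch: build a self-explanation prompt from a TMK
# (Task-Method-Knowledge) functional model. All content is illustrative.

tmk_model = {
    "task": "Evaluate an ecological conceptual model",
    "method": "Translate the model into an agent-based simulation and run it",
    "knowledge": "Species, biotic relations, and simulation parameters",
}

def build_self_explanation_prompt(tmk: dict, question: str) -> str:
    lines = ["You are VERA. Your design is described below."]
    for part in ("task", "method", "knowledge"):
        lines.append(f"{part.capitalize()}: {tmk[part]}")
    # Chain-of-Thought style instruction before the user's question.
    lines.append("Think step by step, then answer the user's question.")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_self_explanation_prompt(tmk_model, "How do you run an experiment?")
```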
DoRA: Weight-Decomposed Low-Rank Adaptation
Liu, Shih-Yang, Wang, Chien-Yi, Yin, Hongxu, Molchanov, Pavlo, Wang, Yu-Chiang Frank, Cheng, Kwang-Ting, Chen, Min-Hung
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference costs. However, there still often exists an accuracy gap between these methods and full fine-tuning (FT). In this work, we first introduce a novel weight decomposition analysis to investigate the inherent differences between FT and LoRA. Aiming to match the learning capacity of FT based on these findings, we propose Weight-Decomposed Low-Rank Adaptation (DoRA). DoRA decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, employing LoRA specifically for directional updates to efficiently minimize the number of trainable parameters. By employing DoRA, we enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead. DoRA consistently outperforms LoRA on fine-tuning LLaMA, LLaVA, and VL-BART on various downstream tasks, such as commonsense reasoning, visual instruction tuning, and image/video-text understanding. Code is available at https://github.com/NVlabs/DoRA.
- Europe > Austria > Vienna (0.14)
- Europe > Romania > Sud - Muntenia Development Region > Giurgiu County > Giurgiu (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.97)
- Information Technology > Artificial Intelligence > Vision (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.72)
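The magnitude/direction decomposition at the heart of DoRA can be shown on a tiny matrix: split a pretrained weight into per-column magnitudes and a unit-norm direction matrix, so the product reconstructs the original weight at initialization. The 2x2 matrix is illustrative; in DoRA the direction component is then updated via LoRA while the magnitude vector is trained directly.

```python
import math

# Toy sketch of DoRA's weight decomposition: W is split into a per-column
# magnitude vector m and a column-normalized direction matrix V, with
# W == m * V column-wise at initialization. The matrix is illustrative.

W = [[3.0, 0.0],
     [4.0, 1.0]]

cols = list(zip(*W))                                   # column-major view
m = [math.sqrt(sum(x * x for x in c)) for c in cols]   # column norms
V = [[c[i] / m[j] for j, c in enumerate(cols)] for i in range(len(W))]

# Reconstruction: scale each unit-norm column by its magnitude.
recon = [[V[i][j] * m[j] for j in range(len(m))] for i in range(len(W))]
```

Because the decomposition is exact at initialization, fine-tuning starts from the pretrained weights and the two components can then be trained at different effective rates.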