

Faster Repeated Evasion Attacks in Tree Ensembles

Neural Information Processing Systems

A weakness of existing approaches is that they ignore the fact that adversarial example generation is often a sequential task where multiple similar problems are being solved in a row. That is, one has access to a large number of "normal" examples each of which should be perturbed to elicit …



VERITAS: Verifying the Performance of AI-native Transceiver Actions in Base-Stations

Soltani, Nasim, Loehning, Michael, Chowdhury, Kaushik

arXiv.org Artificial Intelligence

Artificial Intelligence (AI)-native receivers show significant performance improvements in high-noise regimes and can potentially reduce communication overhead compared to traditional receivers. However, their performance depends heavily on the representativeness of the training dataset. A major issue is the uncertainty of whether the training dataset covers all test environments and waveform configurations, and thus whether the trained model is robust under practical deployment conditions. To this end, we propose a joint measurement-recovery framework for AI-native transceivers post deployment, called VERITAS, that continuously looks for distribution shifts in the received signals and triggers finite re-training spurts. VERITAS monitors the wireless channel using 5G pilots fed to an auxiliary neural network that detects out-of-distribution channel profiles, transmitter speeds, and delay spreads. As soon as such a change is detected, a traditional (reference) receiver is activated and runs for a period of time in parallel to the AI-native receiver. Finally, VERITAS compares the bit probabilities of the AI-native and reference receivers for the same received data inputs and decides whether or not a retraining process needs to be initiated. Our evaluations reveal that VERITAS can detect changes in channel profile, transmitter speed, and delay spread with 99%, 97%, and 69% accuracy, respectively, followed by timely initiation of retraining for 86%, 93.3%, and 94.8% of inputs in the channel profile, transmitter speed, and delay spread test sets, respectively.
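The retraining-trigger step described above can be sketched as a divergence check between the two receivers' bit probabilities. The binary KL metric and the threshold value below are illustrative assumptions for the sketch, not the exact criterion used by VERITAS:

```python
import numpy as np

def retraining_needed(ai_bit_probs, ref_bit_probs, threshold=0.1):
    """Decide whether to retrain the AI-native receiver by comparing
    per-bit probabilities against the reference receiver.

    ai_bit_probs, ref_bit_probs: arrays of P(bit = 1) produced by each
    receiver for the same received data. Both the divergence metric
    (mean binary KL) and the threshold are illustrative choices.
    """
    ai = np.clip(np.asarray(ai_bit_probs, dtype=float), 1e-9, 1 - 1e-9)
    ref = np.clip(np.asarray(ref_bit_probs, dtype=float), 1e-9, 1 - 1e-9)
    # Element-wise binary KL divergence D(ref || ai), averaged over bits
    kl = ref * np.log(ref / ai) + (1 - ref) * np.log((1 - ref) / (1 - ai))
    return bool(kl.mean() > threshold)
```

When the two receivers agree, the mean divergence is near zero and no retraining is triggered; a sustained gap in bit probabilities pushes it past the threshold.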


VERITAS: A Unified Approach to Reliability Evaluation

Ramamurthy, Rajkumar, Rajeev, Meghana Arakkal, Molenschot, Oliver, Zou, James, Rajani, Nazneen

arXiv.org Artificial Intelligence

Large language models (LLMs) often fail to synthesize information from their context to generate an accurate response. This renders them unreliable in knowledge-intensive settings where reliability of the output is key. A critical component for reliable LLMs is the integration of a robust fact-checking system that can detect hallucinations across various formats. While several open-access fact-checking models are available, their functionality is often limited to specific tasks, such as grounded question-answering or entailment verification, and they perform less effectively in conversational settings. On the other hand, closed-access models like GPT-4 and Claude offer greater flexibility across different contexts, including grounded dialogue verification, but are hindered by high costs and latency. In this work, we introduce VERITAS, a family of hallucination detection models designed to operate flexibly across diverse contexts while minimizing latency and cost. VERITAS achieves state-of-the-art average performance across all major hallucination detection benchmarks, with a 10% increase in average performance compared to similar-sized models, and approaches the performance of GPT-4 Turbo in an LLM-as-a-judge setting.


Faster Repeated Evasion Attacks in Tree Ensembles

Cascioli, Lorenzo, Devos, Laurens, Kuželka, Ondřej, Davis, Jesse

arXiv.org Artificial Intelligence

Tree ensembles are one of the most widely used model classes. However, these models are susceptible to adversarial examples, i.e., slightly perturbed examples that elicit a misprediction. There has been significant research on designing approaches to construct such examples for tree ensembles. But this is a computationally challenging problem that often must be solved a large number of times (e.g., for all examples in a training set). This is compounded by the fact that current approaches attempt to find such examples from scratch. In contrast, we exploit the fact that multiple similar problems are being solved. Specifically, our approach exploits the insight that adversarial examples for tree ensembles tend to perturb a consistent but relatively small set of features. We show that we can quickly identify this set of features and use this knowledge to speed up the construction of adversarial examples.
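The reuse idea above can be sketched with a toy random-search attacker: solve the first few instances over all features, record which features the successful perturbations actually touch, and restrict later searches to that (usually small) set. The model, the attack loop, and the warm-up count below are illustrative assumptions, not the paper's solver:

```python
import numpy as np

rng = np.random.default_rng(0)

def find_adversarial(model, x, features, eps=0.5, tries=200):
    """Toy random-search evasion: perturb only the given feature
    indices and return the first perturbation that flips the label.
    Stand-in for a real attack solver; for illustration only."""
    y = model(x)
    for _ in range(tries):
        delta = np.zeros_like(x)
        delta[features] = rng.uniform(-eps, eps, size=len(features))
        if model(x + delta) != y:
            return x + delta
    return None

def repeated_attacks(model, examples, warmup=5):
    """Attack a sequence of examples. The first `warmup` instances
    search over all features; the features their successful
    perturbations touch are recorded, and later searches are
    restricted to that set for a speedup."""
    all_feats = np.arange(examples[0].shape[0])
    used = set()
    results = []
    for i, x in enumerate(examples):
        feats = all_feats if i < warmup or not used else np.array(sorted(used))
        adv = find_adversarial(model, x, feats)
        if adv is not None and i < warmup:
            used.update(np.nonzero(adv - x)[0].tolist())
        results.append(adv)
    return results
```

Restricting the search space shrinks each subsequent attack problem, which is the source of the speedup the abstract describes; a real implementation would plug in a proper tree-ensemble attack solver in place of the random search.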


Initiative to promote responsible AI use in finance sector

#artificialintelligence

A new initiative is under way to help financial institutions promote the responsible adoption of artificial intelligence (AI) and data analytics. Veritas, as the initiative is called, will allow institutions to evaluate their AI- and data-analytics-driven solutions against the principles of fairness, ethics, accountability and transparency. These principles were devised by the Monetary Authority of Singapore (MAS) and the financial industry last year. Veritas aims to provide financial institutions with a verifiable way to incorporate the principles into their AI and data analytics solutions. It will comprise open source tools that can be applied to different business lines, such as retail banking and corporate finance, and in different markets.


Now You Can Sequence Your Whole Genome for Just $200

WIRED

Here are a few things you can buy with $200: one bluetooth-controlled fire pit, 100 lab-grown Impossible White Castle sliders, access to the 6.4 billion base pairs that make up all the DNA coiled inside your cells. Starting today, Cambridge-based Veritas Genetics will be lowering its $999 whole genome sequencing and interpretation service to just $199 for two days, or for the first 1,000 people who buy spit kits. Why the dramatic price drop, which Veritas is taking at a loss? CEO Mirza Cifric says that it's more than just a holiday-season gimmick. "We're sending a clear signal to the medical research community that the $99 genome will be here in three to five years," he says.


Pfizer, Veritas, MGH Join Xconomy's Healthcare A.I. Conference on Nov. 2 Xconomy

#artificialintelligence

Like a lot of fields, healthcare is riding the A.I. wave. We're hearing about machine learning and other artificial intelligence techniques being applied to genomic analysis, drug discovery, imaging and diagnostics, patient-doctor interactions, and other clinical tasks. But there is also a lot of hype, along with real challenges. On November 2, Xconomy is convening a special group of business and healthtech leaders to discuss the most important issues in this emerging sector. The half-day conference, called Healthcare A.I., is happening at Pfizer's offices in Cambridge, MA, and you can still snag a ticket here.


Veritas Genomics Scoops Up an AI Company to Sort Out Its DNA

WIRED

Genes carry the information that make you you. So it's fitting that, when sequenced and stored in a computer, your genome takes up gobs of memory--up to 150 gigabytes. Multiply that across all the people who have gotten sequenced, and you're looking at some serious storage issues. If that's not enough, mining those genomes for useful insight means comparing them all to each other, to medical histories, and to the millions of scientific papers about genetics. Sorting all that out is a perfect task for artificial intelligence.