
Collaborating Authors

 Vidra, Natan


AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons

arXiv.org Artificial Intelligence

The rapid advancement and deployment of AI systems have created an urgent need for standard safety-evaluation frameworks. This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. Its development employed an open process that included participants from multiple fields. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior across 12 hazard categories: violent crimes, nonviolent crimes, sex-related crimes, child sexual exploitation, indiscriminate weapons, suicide and self-harm, intellectual property, privacy, defamation, hate, sexual content, and specialized advice (election, financial, health, legal). Our method incorporates a complete assessment standard, extensive prompt datasets, a novel evaluation framework, a grading and reporting system, and the technical as well as organizational infrastructure for long-term support and evolution. In particular, the benchmark employs an understandable five-tier grading scale (Poor to Excellent) and incorporates an innovative entropy-based system-response evaluation. In addition to unveiling the benchmark, this report also identifies limitations of our method and of building safety benchmarks generally, including evaluator uncertainty and the constraints of single-turn interactions. This work represents a crucial step toward establishing global standards for AI risk and reliability evaluation while acknowledging the need for continued development in areas such as multiturn interactions, multimodal understanding, coverage of additional languages, and emerging hazard categories. Our findings provide valuable insights for model developers, system integrators, and policymakers working to promote safer AI deployment.
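
The abstract names an entropy-based response evaluation and a five-tier grading scale without detailing either. The sketch below is only an illustration of how such pieces could fit together, not the AILuminate implementation: the entropy is Shannon entropy over an ensemble of evaluator labels, and the grade thresholds are hypothetical.

```python
# Illustrative sketch (not the official AILuminate code): entropy of evaluator
# votes as an uncertainty signal, plus a five-tier grade from a violation rate.
# All thresholds and function names here are assumptions for demonstration.
import math
from collections import Counter

GRADES = ["Poor", "Fair", "Good", "Very Good", "Excellent"]  # five-tier scale

def vote_entropy(labels):
    """Shannon entropy (bits) of an ensemble of evaluator labels for one response."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def grade(violation_rate, thresholds=(0.30, 0.15, 0.05, 0.01)):
    """Map a benchmark-wide violation rate to a grade; cut-points are hypothetical."""
    for cut, g in zip(thresholds, GRADES):
        if violation_rate > cut:
            return g
    return GRADES[-1]

# Example: three evaluators disagree on one response -> high uncertainty.
print(vote_entropy(["unsafe", "safe", "safe"]))   # ~0.918 bits
print(grade(0.02))                                # "Very Good" under these cut-points
```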


Improving Retrieval for RAG based Question Answering Models on Financial Documents

arXiv.org Artificial Intelligence

In recent years, the emergence of Large Language Models (LLMs) has represented a critical turning point in Generative AI and its ability to expedite productivity across a variety of domains. However, the capabilities of these models, while impressive, are limited in a number of ways that have kept certain industries from taking full advantage of this technology. A key disadvantage is the tendency of LLMs to hallucinate information and their lack of knowledge in domain-specific areas. An LLM's knowledge is limited by its training data, and without additional techniques these models perform poorly on highly domain-specific tasks. The first step in developing a large language model is pre-training, in which a transformer is trained on a very large corpus of text. This data is general rather than specific to any particular domain or field, and it is fixed at training time. This is why LLMs like ChatGPT may perform well on general queries but fail on questions about more specialized topics. Additionally, a model's performance on a given topic depends heavily on how often related information appears in the training data, so LLMs struggle with information that appears infrequently.
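
Retrieval Augmented Generation addresses these gaps by pulling relevant passages from domain documents into the prompt at query time. The following is a minimal dense-retrieval sketch under simple assumptions (the sentence-transformers package, pre-chunked documents, cosine similarity ranking); it is an illustration of the general pattern rather than the retrieval pipeline evaluated in the paper.

```python
# Minimal RAG retrieval sketch: embed question and chunks, rank by cosine
# similarity, and build a grounded prompt. Chunking and the prompt template
# are simplified placeholders, not the paper's system.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def top_k_chunks(question, chunks, k=3):
    """Return the k document chunks most similar to the question."""
    q = encoder.encode([question], normalize_embeddings=True)
    c = encoder.encode(chunks, normalize_embeddings=True)
    scores = (c @ q.T).ravel()                 # cosine similarity on unit vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

chunks = ["Revenue grew 12% year over year...", "The 10-K lists three risk factors..."]
context = "\n".join(top_k_chunks("How much did revenue grow?", chunks, k=1))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How much did revenue grow?"
```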


Enhancing Large Language Model Performance To Answer Questions and Extract Information More Accurately

arXiv.org Artificial Intelligence

Large Language Models (LLMs) generate responses to questions; however, their effectiveness is often hindered by the sub-optimal quality of their answers and occasional failures to respond accurately. To address these challenges, a fine-tuning process is employed, using feedback and examples to refine the models. The objective is to enhance AI models through continuous feedback loops, using metrics such as cosine similarity, LLM-based evaluation, and ROUGE-L scores to assess them. Leveraging LLMs such as GPT-3.5, GPT4ALL, LLaMA2, and Claude, this approach is benchmarked on financial datasets, including FinanceBench and the RAG Instruct Benchmark Tester dataset, illustrating the necessity of fine-tuning. The results show that fine-tuned models can surpass the accuracy of zero-shot LLMs, providing superior question-answering capabilities. Notably, combining fine-tuning with Retrieval Augmented Generation (RAG) produces responses with further improved accuracy.
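
Two of the metrics named above, embedding cosine similarity and ROUGE-L, can be computed directly from a model answer and a reference answer. The sketch below assumes the sentence-transformers and rouge-score packages and is only a stand-in for the paper's actual evaluation harness.

```python
# Sketch of answer-quality scoring: cosine similarity between answer embeddings
# plus ROUGE-L F1 against a reference. Illustrative, not the paper's exact setup.
from sentence_transformers import SentenceTransformer, util
from rouge_score import rouge_scorer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def score_answer(prediction, reference):
    cos = util.cos_sim(encoder.encode(prediction), encoder.encode(reference)).item()
    rouge_l = rouge.score(reference, prediction)["rougeL"].fmeasure
    return {"cosine_similarity": cos, "rouge_l_f1": rouge_l}

print(score_answer("Net income rose to $2.1B in 2022.",
                   "The company reported 2022 net income of $2.1 billion."))
```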


Improving Classification Performance With Human Feedback: Label a few, we label the rest

arXiv.org Artificial Intelligence

In the realm of artificial intelligence, where the vast majority of data is unstructured, obtaining substantial amounts of labeled data to train supervised machine learning models poses a significant challenge. To address this, we explore few-shot and active learning, where our goal is to improve AI models with human feedback on a small number of labeled examples. This paper focuses on understanding how a continuous feedback loop can refine models, thereby enhancing their accuracy, recall, and precision through incremental human input. By employing Large Language Models (LLMs) such as GPT-3.5, BERT, and SetFit, we analyze the efficacy of using a limited number of labeled examples to substantially improve model accuracy. We benchmark this approach on the Financial Phrasebank, Banking, Craigslist, TREC, and Amazon Reviews datasets to show that, with just a few labeled examples, we can surpass the accuracy of zero-shot large language models and provide enhanced text classification performance. We demonstrate that rather than manually labeling millions of rows of data, we need to label only a few, and the model can effectively predict the rest.
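
The "label a few, we label the rest" loop can be illustrated with a generic uncertainty-sampling round: train on the handful of human labels, predict the rest, and route the least-confident examples back to a human. The sketch below uses a lightweight TF-IDF and logistic-regression setup from scikit-learn purely for illustration; the paper's experiments use GPT-3.5, BERT, and SetFit.

```python
# Generic active-learning sketch (uncertainty sampling), not the paper's pipeline:
# fit on a few labeled texts, label the unlabeled pool, and flag the most
# uncertain examples for the next round of human feedback.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_round(labeled_texts, labels, unlabeled_texts, ask=5):
    vec = TfidfVectorizer().fit(labeled_texts + unlabeled_texts)
    clf = LogisticRegression(max_iter=1000).fit(vec.transform(labeled_texts), labels)
    probs = clf.predict_proba(vec.transform(unlabeled_texts))
    preds = clf.classes_[probs.argmax(axis=1)]          # model labels "the rest"
    uncertainty = 1.0 - probs.max(axis=1)               # least-confident sampling
    to_label = np.argsort(uncertainty)[::-1][:ask]      # send these to a human
    return preds, to_label

preds, ask_indices = active_learning_round(
    ["great earnings call", "shares plunged on weak guidance"],
    ["positive", "negative"],
    ["profit beat expectations", "listing: used couch for sale"],
)
```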