CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence

Neural Information Processing Systems

Cyber threat intelligence (CTI) is crucial in today's cybersecurity landscape, providing essential insights to understand and mitigate ever-evolving cyber threats. The recent rise of Large Language Models (LLMs) has shown potential in this domain, but concerns about their reliability, accuracy, and hallucinations persist. While existing benchmarks provide general evaluations of LLMs, no benchmark addresses the practical and applied aspects of CTI-specific tasks. To bridge this gap, we introduce CTIBench, a benchmark designed to assess LLMs' performance in CTI applications. CTIBench includes multiple datasets focused on evaluating knowledge acquired by LLMs in the cyber-threat landscape. Our evaluation of several state-of-the-art models on these tasks provides insights into their strengths and weaknesses in CTI contexts, contributing to a better understanding of LLM capabilities in CTI.


Supplementary Material for CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence (Dataset Documentation, Hosted URLs)

Neural Information Processing Systems

The benchmark evaluates the ability of LLMs to understand and analyze various aspects of open-source CTI, including tasks that assess the LLMs' ability to evaluate threat severity. The dataset consists of 5 TSV files, each corresponding to a different task. Each file contains a "Prompt" column used to pose questions to the LLM, and most files also include a "GT" (ground truth) column. The dataset includes URLs indicating the sources from which the data was collected. A permanent DOI identifier is associated with the dataset: AI4Sec (2024).
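Given the file layout described above (TSV files with a "Prompt" column and usually a "GT" column), a task file could be read as sketched below. This is a minimal illustration under stated assumptions, not the benchmark's own loader; the exact file names and any columns beyond "Prompt" and "GT" are not confirmed by the source.

```python
import csv
import io

def load_task_rows(tsv_text):
    """Parse one CTIBench task file given as TSV text.

    Returns a list of (prompt, ground_truth) pairs. ground_truth is
    None for files that lack a "GT" column, per the description that
    only most files include one.
    """
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [(row["Prompt"], row.get("GT")) for row in reader]
```

For example, a two-column file would yield prompt/answer pairs that can be fed to an LLM and scored against the ground truth.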



CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence

Alam, Md Tanvirul, Bhusal, Dipkamal, Nguyen, Le, Rastogi, Nidhi

arXiv.org Artificial Intelligence
