Evaluating LLMs and Prompting Strategies for Automated Hardware Diagnosis from Textual User-Reports

Caminha, Carlos, Silva, Maria de Lourdes M., Chaves, Iago C., Brito, Felipe T., Farias, Victor A. E., Machado, Javam C.

arXiv.org Artificial Intelligence 

Computer manufacturers offer platforms for users to describe device faults using textual reports such as "My screen is flickering". Identifying the faulty component from the report is essential for automating tests and improving user experience. However, such reports are often ambiguous and lack detail, making this task challenging. Large Language Models (LLMs) have shown promise in addressing such issues. This study evaluates 27 open-source models (1B-72B parameters) and 2 proprietary LLMs using four prompting strategies: Zero-Shot, Few-Shot, Chain-of-Thought (CoT), and CoT+Few-Shot (CoT+FS). We conducted 98,948 inferences, processing over 51 million input tokens and generating 13 million output tokens, and achieved an F1-score of up to 0.76. Results show that three models offer the best balance between size and performance: mistral-small-24b-instruct and two smaller models, among them llama-3.2-1b-instruct
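To make the four prompting strategies concrete, the sketch below assembles prompts for the fault-classification task described in the abstract. It is a minimal illustration assuming a plain text-completion interface; the component labels, few-shot examples, and prompt wording are hypothetical stand-ins, not the authors' actual templates.

```python
"""Sketch of the four prompting strategies compared in the paper
(Zero-Shot, Few-Shot, CoT, CoT+FS) for fault-component classification.
Labels, examples, and wording are illustrative assumptions."""

# Hypothetical label set for the faulty-component classification task.
COMPONENTS = ["screen", "battery", "keyboard", "storage", "motherboard"]

# Hypothetical labeled examples used by the Few-Shot variants.
SHOTS = [
    ("My laptop dies after ten minutes even when plugged in.", "battery"),
    ("Several keys stopped responding after a coffee spill.", "keyboard"),
]

def build_prompt(report: str, strategy: str) -> str:
    """Assemble a prompt for one of: zero_shot, few_shot, cot, cot_few_shot."""
    task = (
        "Identify the faulty hardware component in the user report. "
        f"Answer with exactly one of: {', '.join(COMPONENTS)}."
    )
    parts = [task]
    if "cot" in strategy:
        # CoT adds an instruction to reason about symptoms before answering.
        parts.append("Think step by step about the symptoms before answering.")
    if "few_shot" in strategy:
        # Few-Shot prepends labeled report/component pairs.
        for text, label in SHOTS:
            parts.append(f"Report: {text}\nComponent: {label}")
    parts.append(f"Report: {report}\nComponent:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    for strategy in ("zero_shot", "few_shot", "cot", "cot_few_shot"):
        print(f"--- {strategy} ---")
        print(build_prompt("My screen is flickering", strategy))
        print()
```

In this framing, CoT+FS simply composes the step-by-step instruction with the labeled examples, mirroring how the abstract names the combined strategy; the resulting prompt string would then be sent to each of the 29 evaluated models.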
