Prompt engineering and framework: implementation to increase code reliability based guideline for LLMs
Rogelio Cruz, Jonatan Contreras, Francisco Guerrero, Ezequiel Rodriguez, Carlos Valdez, Citlali Carrillo
In this paper, we propose a novel prompting approach aimed at enhancing the ability of Large Language Models (LLMs) to generate accurate Python code. Specifically, we introduce a prompt template designed to improve the quality and correctness of generated code snippets, enabling them to pass unit tests and produce reliable results. Through experiments on two state-of-the-art LLMs using the HumanEval dataset, we demonstrate that our approach outperforms the widely studied zero-shot and Chain-of-Thought (CoT) methods on the Pass@k metric. Furthermore, our method achieves these improvements with significantly lower token usage than the CoT approach, making it both effective and resource-efficient, thereby lowering computational demands and reducing the environmental footprint of LLM use. These findings highlight the potential of tailored prompting strategies to optimize code generation performance, paving the way for broader applications in AI-driven programming tasks.
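For readers unfamiliar with the metric, Pass@k on HumanEval is conventionally computed with the unbiased estimator introduced alongside the benchmark (Chen et al., 2021). The sketch below assumes that standard definition; the paper's own evaluation script is not reproduced in this listing.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator (Chen et al., 2021).

    n: total code samples generated for a problem
    c: number of samples that pass all unit tests
    k: budget of samples considered by the metric
    """
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    # 1 minus the probability that all k drawn samples fail
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per task, 37 of which pass -> Pass@1 estimate
print(pass_at_k(n=200, c=37, k=1))  # 0.185
```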
arXiv.org Artificial Intelligence
Jun-16-2025