Securing Large Language Models (LLMs) Against Prompt Injection Attacks