ChatGPT can be used to generate malicious code, finds research

#artificialintelligence 

OpenAI's ChatGPT, the large language model (LLM)-based artificial intelligence (AI) text generator, can seemingly be used to generate code for malicious tasks, a research note by cyber security firm Check Point observed on Tuesday. Researchers at Check Point used ChatGPT and Codex, a fellow OpenAI natural-language-to-code generator, with standard English instructions to create code that could be used to launch spear phishing attacks. The biggest issue with such AI code generators lies in the fact that these natural language processing (NLP) tools can lower the entry barrier for hackers with malicious intent. Because the code generators do not require users to be well versed in coding, any user can collate the logical flow of a malicious tool from the open web and use that logic to generate the syntax for such tools. Demonstrating the issue, Check Point showed how the AI code generator could be used to create a basic code template for a phishing email scam, and then apply follow-up instructions in plain English to keep improving the code.
