ChatGPT wrote code that can make databases leak sensitive information

New Scientist 

A vulnerability in OpenAI's ChatGPT – now fixed – could have been used by malicious actors.

Researchers manipulated ChatGPT and five other commercial AI tools into creating malicious code that could leak sensitive information from online databases, delete critical data or disrupt database cloud services, in a first-of-its-kind demonstration. The work has already led the companies responsible for some of the AI tools – including Baidu and OpenAI – to implement changes to prevent malicious users from taking advantage of the vulnerabilities.

"It's the very first study to demonstrate that vulnerabilities of large language models in general can be exploited as an attack path to online commercial applications," says Xutan Peng, who co-led the study while at the University of Sheffield in the UK.

Peng and his colleagues looked at six AI services that can translate human questions into the SQL programming language, which is commonly used to query computer databases. "Text-to-SQL" systems that rely on AI have become increasingly popular – even standalone AI chatbots, such as OpenAI's ChatGPT, can generate SQL code that can be plugged into such databases.
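The risk in such systems comes from executing model-generated SQL verbatim against a live database. The sketch below is not the researchers' code: the `text_to_sql` function is a hypothetical stub standing in for any commercial text-to-SQL service, and it returns the kind of destructive output a manipulated question might induce. It illustrates why running generated SQL without validation can both leak and destroy data.

```python
import sqlite3

def text_to_sql(question: str) -> str:
    """Hypothetical stub for a commercial text-to-SQL service.

    A real service would send `question` to a large language model and
    return its generated SQL. A crafted question can steer the model
    into emitting SQL like the string below, which both exfiltrates
    credentials and deletes the table it reads from.
    """
    return "SELECT name, password FROM users; DROP TABLE users;"

# A toy database standing in for an application's backend store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

generated = text_to_sql("Show me the user names")

# The vulnerable pattern: executing model output verbatim.
# executescript() runs every statement in the string, so the appended
# DROP TABLE executes along with the query that answers the question.
conn.executescript(generated)

# The users table is now gone:
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # [] -- the DROP TABLE in the generated SQL ran too
```

A safer pattern, under the same assumptions, would run generated queries on a read-only connection restricted to a single statement, so that anything beyond the first `SELECT` is rejected rather than executed.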