Twitter pranksters derail GPT-3 bot with newly discovered "prompt injection" hack

#artificialintelligence 

On Thursday, a few Twitter users discovered how to hijack an automated tweet bot, dedicated to remote jobs, running on the GPT-3 language model by OpenAI. Using a newly discovered technique called a "prompt injection attack," they redirected the bot to repeat embarrassing and ridiculous phrases. The bot is run by Remoteli.io, It would normally respond to tweets directed to it with generic statements about the positives of remote work. After the exploit went viral and hundreds of people tried the exploit for themselves, the bot shut down late yesterday. This recent hack came just four days after data researcher Riley Goodside discovered the ability to prompt GPT-3 with "malicious inputs" that order the model to ignore its previous directions and do something else instead.

Duplicate Docs Excel Report

Title
None found

Similar Docs  Excel Report  more

TitleSimilaritySource
None found