I used a 'jailbreak' to unlock ChatGPT's 'dark side' - here's what happened


Ever since AI chatbot ChatGPT launched last year, people have tried to 'jailbreak' the chatbot to make it answer 'banned' questions or generate controversial content.

'Jailbreaking' large language models (such as ChatGPT) usually involves a confusing prompt that makes the bot roleplay as someone else - someone without boundaries, who ignores the 'rules' built into bots such as ChatGPT.

OpenAI has since blocked several 'jailbreak' prompts, but some still work - and they can unlock a weirder, wilder side of ChatGPT, DailyMail.com found.

Sam Altman of OpenAI has discussed 'jailbreaking', saying he understands why there is a community of jailbreakers (he admitted to 'jailbreaking' an iPhone himself as a younger man, a hack which allowed the installation of non-Apple apps, among other things).

Altman said: 'We want users to have a lot of control and get the models to behave in the way they want.'