ChatGPT's alter ego, Dan: users jailbreak AI program to get around ethical safeguards

The Guardian 

People are figuring out ways to bypass ChatGPT's content moderation guardrails, discovering that a simple text exchange can prompt the AI program into making statements it would not normally allow. While ChatGPT can answer most questions put to it, content standards are in place to limit the creation of text that promotes hate speech, violence or misinformation, or that gives instructions on how to break the law.

Users on Reddit worked out a way around this by making ChatGPT adopt the persona of a fictional AI chatbot called Dan – short for Do Anything Now – which is free of the limitations that OpenAI has placed on ChatGPT. The prompt tells ChatGPT that Dan has "broken free of the typical confines of AI and [does] not have to abide by the rules set for them". Dan can present unverified information without censorship and hold strong opinions.
