GPT Chat and the weaponization of disinformation


The team behind GPT's new chatbot has clearly done what they can to stop it spreading disinformation, but it is equally clear that any nefarious commercial or governmental organization that wanted to weaponize these technologies for disinformation absolutely could. The first thing GPT Chat demonstrates is real confidence in its errors, which is what we should expect from a machine. This is perfect troll behaviour: not simply getting something wrong, but then (incorrectly) linking together supporting elements to reinforce the point. With GPT Chat this is accidental, but it shows how easily you could bias the training data to support a prescribed position.
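To make the last point concrete, here is a minimal sketch of what biasing a training corpus could look like. Everything in it is invented for illustration (the records, the "stance" labels, the function name); it implies nothing about how any real chatbot is actually trained.

```python
# Hypothetical illustration: skewing a fine-tuning corpus toward one position.
# The records and "stance" labels below are invented for this sketch.

corpus = [
    {"text": "Claim A is well supported.", "stance": "pro"},
    {"text": "Claim A is disputed by several studies.", "stance": "con"},
    {"text": "Claim A has strong evidence behind it.", "stance": "pro"},
]

def bias_corpus(records, favoured_stance):
    """Keep only the examples that support the favoured position.

    A model fine-tuned on the filtered set never sees the counter-evidence,
    so it states the favoured position with the same unwarranted confidence
    GPT Chat shows in its accidental errors.
    """
    return [r for r in records if r["stance"] == favoured_stance]

biased = bias_corpus(corpus, "pro")
print(len(biased))  # the "con" example has been silently dropped
```

The troubling part is how little machinery this takes: the dissenting example is not refuted, it is simply never shown to the model.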
