OpenAI just released the AI it said was too dangerous to share
In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text. But rather than release the AI in its entirety, the team shared only a smaller model out of fear that people would use the more robust tool maliciously -- to produce fake news articles or spam, for example.

On Tuesday, however, OpenAI published a blog post announcing its decision to release the algorithm in full, as it has "seen no strong evidence of misuse so far." According to OpenAI's post, the company did see some "discussion" regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm.

The problem might be that, while GPT-2 is one of the best text-generating AIs in existence -- if not the best -- it still can't produce content that's indistinguishable from text written by a human.