A few notes on OpenAI's "fake news–writing AI"

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Last week, artificial intelligence research lab OpenAI released a larger version of GPT-2, the controversial text-generating AI model it first introduced in February. At the time, the lab refrained from releasing the full model, fearing it would be used for malicious purposes. Instead, OpenAI opted for a staged release, starting with a limited model (124 million parameters) and gradually releasing more capable ones. In May, the research lab released the 355-million-parameter version of GPT-2, and last week it finally released the 774-million-parameter model, roughly half the size of the full text generator.


A.I. and the Future of Cheating

#artificialintelligence

No matter whether you were a straight-A student at university or more a student of beer pong, it's extremely unlikely that your positive memories of college took place in an examination hall. Beyond being generally miserable, exams exacerbate anxiety and other mental health issues, and do a poor job of assessing skills like critical thinking and creativity. Time-pressured tests are used as the key filter for several prestigious professions and universities and, some argue, for no good reason. Given this sad state of affairs, it should be welcome to see supervised exams and tests slowly fall out of vogue. Headmasters and professors have urged that more flexible, less time-pressured assessments like essays and written assignments should replace exams.


OpenAI's 'dangerous' AI text generator is out: People find GPT-2's words 'convincing' (ZDNet)

#artificialintelligence

OpenAI, the non-profit co-founded by Elon Musk in 2015 (he has since left the organization), has released the biggest and final version of the GPT-2 text-generating language model, which it has admitted could be dangerous in the wrong hands. However, it says the newly released full model's output is only slightly more convincing to humans than the previous version's. The organization released the first portion of the model in February as part of a staged process, beginning with just 124 million parameters. It held back the full model, with 1.5 billion parameters, because its researchers believed it was too dangerous and could be used by malicious actors, such as terrorists and state-sponsored hackers. Among the malicious purposes for which OpenAI admitted GPT-2 might be used are generating misleading news articles, impersonating others online, automating the production of abusive or fake content for social media, and automating the creation of spam and phishing content.
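The headline parameter counts quoted across these articles (124M, 355M, 774M, 1.5B) follow directly from GPT-2's published Transformer architecture. As a sketch, the counts below assume the standard GPT-2 configuration — a 50,257-token vocabulary, 1,024-position context window, a tied output embedding, and the usual per-layer attention and MLP weight shapes — and simply vary the layer count and hidden size for each release:

```python
def gpt2_param_count(n_layer, d_model, vocab=50257, n_ctx=1024):
    """Approximate GPT-2 parameter count from its architecture.

    Assumes the standard GPT-2 layout: token + position embeddings,
    n_layer Transformer blocks (fused QKV projection, attention output
    projection, 4x-wide MLP, two layer norms), and a final layer norm.
    The output head is tied to the token embedding, so it adds nothing.
    """
    embeddings = vocab * d_model + n_ctx * d_model     # wte + wpe
    qkv = d_model * 3 * d_model + 3 * d_model          # fused Q,K,V + bias
    attn_out = d_model * d_model + d_model             # attention projection
    mlp = (d_model * 4 * d_model + 4 * d_model         # up-projection
           + 4 * d_model * d_model + d_model)          # down-projection
    layer_norms = 2 * (2 * d_model)                    # two LNs (scale + bias)
    block = qkv + attn_out + mlp + layer_norms
    final_ln = 2 * d_model
    return embeddings + n_layer * block + final_ln

# The four staged GPT-2 releases (layers, hidden size):
for name, n_layer, d_model in [("small", 12, 768), ("medium", 24, 1024),
                               ("large", 36, 1280), ("xl", 48, 1600)]:
    print(name, round(gpt2_param_count(n_layer, d_model) / 1e6), "M")
```

Running this reproduces the figures in the coverage — about 124M, 355M, 774M, and 1,558M parameters — which is why the 774M release was described as roughly half the capacity of the full model.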


OpenAI has published the text-generating AI it said was too dangerous to share

#artificialintelligence

The research lab OpenAI has released the full version of a text-generating AI system that experts warned could be used for malicious purposes. The institute originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it's released smaller, less complex versions of GPT-2 and studied their reception. Others also replicated the work. In a blog post this week, OpenAI now says it's seen "no strong evidence of misuse" and has released the model in full.

