OpenAI just released the AI it said was too dangerous to share

#artificialintelligence

In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text. Rather than release the AI in its entirety, however, the team shared only a smaller model out of fear that people would use the more robust tool maliciously, for example to produce fake news articles or spam. On Tuesday, OpenAI published a blog post announcing its decision to release the algorithm in full, as it has "seen no strong evidence of misuse so far." According to OpenAI's post, the company did see some "discussion" regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm. The problem might be that, while GPT-2 is one of the best text-generating AIs in existence, if not the best, it still can't produce content that's indistinguishable from text written by a human.
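Because the full model is now public, generating text with it takes only a few lines of code. The following minimal sketch is illustrative rather than drawn from the article: it assumes the Hugging Face transformers library, which distributes the released GPT-2 weights, and simply samples a continuation of an arbitrary prompt.

# Minimal GPT-2 text-generation sketch (assumes: pip install transformers torch).
# Uses the Hugging Face transformers wrapper around the released GPT-2 weights;
# this is an illustrative example, not OpenAI's own tooling.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The pipeline returns a list of dicts containing the generated continuation.
print(outputs[0]["generated_text"])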


OpenAI Creates a Gym to Train Your AI

#artificialintelligence

OpenAI, a non-profit artificial intelligence research company backed by Elon Musk, launched a toolkit for developing and comparing reinforcement learning algorithms. OpenAI Gym is a suite of environments that includes simulated robotic tasks and Atari games, as well as a website where people can post their results and share code. OpenAI researcher John Schulman shared some details about his organization, why reinforcement learning is important and how the OpenAI Gym will make it easier for AI researchers to design, iterate on and improve their next-generation applications.
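To make the toolkit concrete, here is a minimal sketch of the basic Gym interaction loop. The environment name and episode length are arbitrary illustrative choices, not taken from the article, and the calls follow the classic gym API (newer Gymnasium releases changed the reset and step signatures).

# Minimal OpenAI Gym loop (classic gym API; assumes: pip install gym).
# CartPole and the step count are arbitrary choices for illustration.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

for t in range(200):
    action = env.action_space.sample()                  # placeholder random policy
    observation, reward, done, info = env.step(action)  # advance the simulation
    if done:                                             # episode ended, start a new one
        observation = env.reset()

env.close()

A real reinforcement learning algorithm would replace the random policy with one that updates itself from the observed rewards.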


OpenAI's Artificial Intelligence Strategy

#artificialintelligence

For several years, there has been much discussion about AI's capabilities. Many believe that AI will outperform humans at certain tasks. Although the technology is still in its infancy, researchers expect human-like autonomous systems within the coming years. OpenAI has a leading position in the artificial intelligence research space. Founded in December 2015, the company's goal is to advance digital intelligence in a way that benefits humanity as a whole.


A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit

#artificialintelligence

One of the worst-kept secrets in Silicon Valley has been the huge salaries and bonuses that experts in artificial intelligence can command. Now, a little-noticed tax filing by a research lab called OpenAI has made some of those eye-popping figures public. OpenAI paid its top researcher, Ilya Sutskever, more than $1.9 million in 2016. It paid another leading researcher, Ian Goodfellow, more than $800,000 -- even though he was not hired until March of that year. Both were recruited from Google.


Teaching AI to read is harder than it seems

#artificialintelligence

Researchers have shown that rapidly improving AI techniques can facilitate the creation of fake images that look real. As these kinds of technologies move into the language field as well, Howard says, we may need to be more skeptical than ever about what we encounter online. These new language systems learn by analysing millions of sentences written by humans. A system built by OpenAI, a lab based in San Francisco, analysed thousands of self-published books, including romance novels, science fiction and more. Each system learned a particular skill by analysing all that text.
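
The idea of learning by analysing text can be made concrete with a deliberately tiny sketch. This toy next-word frequency model is only an illustration of the statistical intuition, not the neural-network approach OpenAI's system actually uses.

# Toy illustration of learning from raw text: a next-word frequency model.
# A deliberately simplified sketch of the idea, not OpenAI's actual method.
from collections import defaultdict, Counter

corpus = [
    "the model reads the text",
    "the model predicts the next word",
    "the next word follows the text",
]

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the follower of `word` seen most often in the corpus."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # prints one of the words that most often follows "the"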