In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text. Rather than release the AI in its entirety, though, the team shared only a smaller model out of fear that people would use the more robust tool maliciously -- to produce fake news articles or spam, for example. On Tuesday, however, OpenAI published a blog post announcing its decision to release the algorithm in full, as it has "seen no strong evidence of misuse so far." According to OpenAI's post, the company did see some "discussion" regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm. One likely reason: while GPT-2 is one of the best text-generating AIs in existence -- if not the best -- it still can't produce content that's indistinguishable from text written by a human.
OpenAI, the nonprofit artificial intelligence research company established last year with backing from several Silicon Valley figures, today announced its first product: a proving ground for reinforcement learning algorithms, which involve training machines to do things through trial and error. OpenAI is releasing tools you can run locally to test out algorithms in various "environments" -- including Atari games like Air Raid, Breakout, and Ms. Pac-Man -- and a Web service for sharing test results. The system automatically scores evaluations and also seeks to have results reviewed and reproduced by other people. "We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community," OpenAI's Greg Brockman and John Schulman wrote in a blog post.
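The trial-and-error loop that Gym standardizes can be sketched without the library itself. Below, `GridWalkEnv` is a hypothetical stand-in environment (not part of Gym) that mimics the `reset()`/`step()` interface Gym popularized, with `step()` returning the familiar `(observation, reward, done, info)` four-tuple; a random agent then interacts with it, learning-free, purely to show the control flow.

```python
import random

class GridWalkEnv:
    """Hypothetical stand-in for a Gym-style environment: the agent
    starts at position 0 and tries to reach position `goal` by
    moving left (action 0) or right (action 1)."""

    def __init__(self, goal=3):
        self.goal = goal
        self.position = 0

    def reset(self):
        # Start a new episode and return the initial observation.
        self.position = 0
        return self.position

    def step(self, action):
        # Apply the action: 0 moves left, 1 moves right.
        self.position += 1 if action == 1 else -1
        done = self.position == self.goal
        reward = 1.0 if done else 0.0  # reward only on reaching the goal
        return self.position, reward, done, {}  # obs, reward, done, info

# Trial and error: a random agent interacting with the environment.
random.seed(0)
env = GridWalkEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = random.choice([0, 1])
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()  # episode over; start again
```

An RL algorithm would replace `random.choice` with a policy that improves from the rewards it observes; the environment interface stays the same, which is what makes a shared "proving ground" possible.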
For several years, there has been much discussion about AI's capabilities. Many believe that AI will eventually outperform humans in certain areas. Though the technology is still in its infancy, researchers expect human-like autonomous systems within the coming years. OpenAI holds a leading position in the artificial intelligence research space. Founded in December 2015, the company's goal is to advance digital intelligence in a way that benefits humanity as a whole.
The academic discipline of artificial intelligence got its start in 1955 with the goal of creating machines capable of mimicking human cognitive function. Learning, problem solving, applying common sense, operating under conditions of ambiguity -- taken together, these traits form the basis for general intelligence, the long-standing goal of AI research. Since its inception, the field has experienced boom and bust cycles, fueled by an abundance of optimism followed by a collapse of funding. These setbacks have been so dramatic and so endemic to the field that they received their own neologism: AI winter. The two most dramatic winters occurred in the mid-to-late '70s and from the mid-'80s to the mid-'90s.
A new general-purpose language machine learning model is pushing the boundaries of what AI can do. Why it matters: OpenAI's GPT-3 system can reasonably make sense of and write human language. It's still a long way from genuine artificial intelligence, but it may be looked back on as the iPhone of AI, opening the door to countless commercial applications -- both benign and potentially dangerous. Driving the news: After announcing GPT-3 in a paper in May, OpenAI recently began offering a select group of people access to the system's API to help the nonprofit explore the AI's full capabilities. How it works: GPT-3 works the same way as predecessors like OpenAI's GPT-2 and Google's BERT -- analyzing huge swaths of the written internet and using that information to predict which words tend to follow one another.
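GPT-3's transformer architecture and scale are far beyond anything shown here, but the core idea -- learning from text which words tend to follow one another, then using those statistics to continue a prompt -- can be illustrated with a toy bigram model. Everything below (`train_bigrams`, `generate`, the tiny corpus) is an invented sketch for intuition, not OpenAI's method.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follow_counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follow_counts[current][nxt] += 1
    return follow_counts

def generate(follow_counts, start, length=8):
    """Continue `start` by greedily emitting each word's most
    frequent successor, up to `length` words total."""
    out = [start]
    for _ in range(length - 1):
        successors = follow_counts.get(out[-1])
        if not successors:
            break  # no known continuation for this word
        out.append(max(successors, key=successors.get))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

A real language model differs in every dimension that matters: it conditions on long contexts rather than a single preceding word, learns distributed representations rather than raw counts, and samples from a probability distribution over a huge vocabulary rather than greedily picking the most frequent successor.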