How Open Source is eating AI

#artificialintelligence 

The text-generation cycle:

- Aug 2019: GPT-2 was cloned in the open by two master's students as OpenGPT-2
- Nov 2019: OpenAI released their full 1.5B parameter GPT-2 model, after a cautious staged release process
- May 2020: OpenAI released GPT-3 as a paper, followed by a closed beta API in June 2020
- Mar 2021: EleutherAI released their open GPT-Neo 1.3B and 2.7B models
- May 2022: Meta released OPT-175B for researchers (with logbook! and an open license)

The text-to-image cycle took ~4 months:

- Apr 2022: OpenAI announced DALL-E 2 with a limited "research preview"
- Aug 2022: Stability AI publicly released Stable Diffusion, with open code and weights

The timelines above are highly cherrypicked, of course; the story stretches back much further if you take into account the development history, starting from the academic papers on diffusion models (2015) and transformers (2017), and older work on GANs.

But what is more interesting is what has happened since: OpenAI's audio-to-text model, Whisper, was released under an MIT license in September 2022 with no API paywall. Of course, there is less scope for abuse in the audio-to-text domain, but more than a few people have speculated that the reception to Stable Diffusion's release influenced the decision to open source it.

Sufficiently advanced community is indistinguishable from magic.
