OpenAI's DALL-E 2 produces fantastical images of most anything you can imagine

Engadget

In January 2021, the OpenAI consortium -- founded by Elon Musk and financially backed by Microsoft -- unveiled its most ambitious project to date, the DALL-E machine learning system. This ingenious multimodal AI was capable of generating images (albeit rather cartoonish ones) based on the attributes described by a user -- think "a cat made of sushi" or "an x-ray of a capybara sitting in a forest." On Wednesday, the consortium unveiled DALL-E's next iteration, which boasts higher resolution and lower latency than the original. The first DALL-E (a portmanteau of "Dali," as in the artist, and "WALL-E," as in the animated Pixar character) could generate images as well as combine multiple images into a collage, provide varying angles of perspective, and even infer elements of an image -- such as shadowing effects -- from the written description. "Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to 'fill in the blanks' when the caption implies that the image must contain a certain detail that is not explicitly stated," the OpenAI team wrote in 2021.


OpenAI just released the AI it said was too dangerous to share

#artificialintelligence

In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text. But rather than release the AI in its entirety, the team shared only a smaller model out of fear that people would use the more robust tool maliciously -- to produce fake news articles or spam, for example. But on Tuesday, OpenAI published a blog post announcing its decision to release the algorithm in full as it has "seen no strong evidence of misuse so far." According to OpenAI's post, the company did see some "discussion" regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm. The problem might be that, while GPT-2 is one of -- if not the -- best text-generating AIs in existence, it still can't produce content that's indistinguishable from text written by a human.


Microsoft Releases Azure Open AI Service Including Access to Powerful GPT-3 Models

#artificialintelligence

At its recent Ignite conference, Microsoft announced the new Azure OpenAI Service in preview, allowing access to OpenAI's API through the Azure platform. This new Azure Cognitive Service will give customers access to OpenAI's powerful GPT-3 models, along with the security, reliability, compliance, data privacy, and other enterprise-grade capabilities of the Azure platform. Microsoft earlier invested in OpenAI, which was initially founded as a non-profit organization by several backers, including Tesla founder Elon Musk. The OpenAI API is the first commercial product of the for-profit OpenAI LP entity, allowing developers to leverage GPT-3, its general-purpose natural language model. GPT-3 and its fine-tuned derivatives, such as Codex, can be tailored to handle applications requiring a deep understanding of language, such as converting natural language into software code, summarizing large amounts of text, and generating answers to questions.
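To make the article's description of the API concrete, here is a minimal sketch of how a developer might assemble a GPT-3 completion request against OpenAI's public REST endpoint. The endpoint URL and field names (`model`, `prompt`, `max_tokens`, `temperature`) follow OpenAI's documented completions API; the API key, model name, and prompt are placeholders, and no network call is made here.

```python
import json

# Public completions endpoint documented by OpenAI at the time.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, model="text-davinci-003",
                             max_tokens=64, temperature=0.7):
    """Assemble headers and a JSON body for a GPT-3 completion call."""
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
    return headers, body

# Sending the request is left to any HTTP client, e.g. urllib.request
# or the `openai` Python package, which wraps this same endpoint.
headers, body = build_completion_request("Summarize the following text: ...")
```

The same request shape serves summarization, question answering, or code generation; only the prompt (and, for Codex, the model name) changes.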


CLIP: OpenAI's Multi-Modal Model

#artificialintelligence



OpenAI Releases An Improved Version Of Its Codex AI Model

#artificialintelligence

Today OpenAI is releasing a new and improved version of its Codex AI model to the public. Codex is a descendant of OpenAI's GPT-3, which was released last summer. Codex shares much of its training data with its predecessor but is trained primarily on code, so it can read a text prompt submitted by a human user and complete it with working source code. In the latest release, OpenAI has made a big change to Codex: it now accepts commands in plain English as well. This lets someone building a game or web app describe what they want -- without naming any variables -- and quickly get live, working code back.
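The plain-English prompting style described above can be sketched as follows: the request sends Codex a natural-language instruction (here phrased as a docstring) and asks the model to complete it with code. The model name `code-davinci-002` and the request fields are assumptions based on OpenAI's publicly documented Codex API; this builds the request body only and does not call the service.

```python
import json

# A plain-English instruction, framed as a docstring so the model's
# natural continuation is the code that implements it.
prompt = (
    '"""\n'
    "Create a function that returns the n-th Fibonacci number.\n"
    '"""\n'
)

request_body = json.dumps({
    "model": "code-davinci-002",  # assumed Codex model name
    "prompt": prompt,
    "max_tokens": 128,
    "temperature": 0,      # deterministic sampling suits code generation
    "stop": ['"""'],       # stop before the model opens a new docstring
})
```

Setting `temperature` to 0 and adding a `stop` sequence are common choices for code prompts: the first avoids random variation in the generated code, and the second keeps the model from rambling past the requested function.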