
Artificial intelligence text-to-image tool may use its own 'secret language', experts claim

Daily Mail - Science & tech

An artificial intelligence (AI) tool that can transform famous paintings into different art styles, or create brand new artworks from a text prompt, may work by using a 'secret language', experts claim. Text-to-image app DALL-E 2 was released by artificial intelligence lab OpenAI last month, and is able to create multiple realistic images and artwork from a single text prompt. It is also able to add objects into existing images, or even provide different points of view on an existing image. Now researchers believe they may have figured out how the technology works, after discovering that gibberish words produce specific pictures. Computer scientists used DALL-E 2 to generate images that contained text inside them, by asking for 'captions' or 'subtitles'.
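For readers who want to try this themselves, the sketch below shows one way to run the same probe against OpenAI's public Images API via its Python SDK. The prompts are illustrative, and the gibberish string is a hypothetical placeholder for whatever text you transcribe from the first image; nothing here is the researchers' exact setup.

# Minimal sketch of the 'secret language' probe, assuming the openai
# Python SDK (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Step 1: ask for an image that renders text, e.g. via "subtitles".
step1 = client.images.generate(
    model="dall-e-2",
    prompt="Two farmers talking about vegetables, with subtitles",
    n=1,
    size="512x512",
)
print("Image containing rendered text:", step1.data[0].url)

# Step 2: manually transcribe the gibberish from that image, then feed it
# back as a prompt to see whether it consistently maps to one subject.
gibberish = "vicootes"  # hypothetical placeholder transcription
step2 = client.images.generate(model="dall-e-2", prompt=gibberish, n=4, size="512x512")
for img in step2.data:
    print("Gibberish-prompt image:", img.url)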


When machine learning meets surrealist art meets Reddit, you get DALL-E mini

NPR Technology

An image of babies doing parkour generated by DALL-E mini. DALL-E mini is the AI bringing to life all of the goofy "what if" questions you never asked: What if Voldemort was a member of Green Day? What if there was a McDonald's in Mordor? What if scientists sent a Roomba to the bottom of the Mariana Trench?


DALL-E 2 could become OpenAI's first money printing machine

#artificialintelligence

Interest in DALL-E 2 clearly exceeds that in previous OpenAI models. This seems relevant because it could be a first indication of the impact of DALL-E on the labor market. In mid-April, OpenAI unveiled DALL-E 2, a milestone in generative AI systems and probably in the history of artificial intelligence: It generates abstract drawings as well as photorealistic images based on individual sentences and phrases. It can even use photography metadata, such as lens and exposure time, to generate photos that look like they were snapped with the appropriate lens. For weeks, the first beta testers have been sharing their generated images on social media and in the first DALL-E 2 image databases. OpenAI has already achieved great success with the text AI GPT-3.
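To make the metadata point concrete, here is a hedged sketch: camera parameters are simply written into the prompt string, and the model imitates the corresponding look. The lens and exposure values below are arbitrary examples, not settings the model formally parses; the SDK call is the same one assumed in the earlier sketch.

# Sketch: nudging DALL-E 2 toward a photographic look by embedding camera
# metadata in the prompt (assumes the openai Python SDK and an API key).
from openai import OpenAI

client = OpenAI()

prompt = (
    "A red fox in a misty forest at dawn, photograph, "
    "85mm f/1.4 lens, shallow depth of field, 1/200s exposure"
)
resp = client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="1024x1024")
print(resp.data[0].url)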


DALL-E 2 Made Its First Magazine Cover

#artificialintelligence

The group, composed of editors from Cosmopolitan, members of artificial-intelligence research lab OpenAI, and a digital artist (Karen X. Cheng, the first "real-world" person granted access to the computer system they're all using), is working together, with this system, to try to create the world's first magazine cover designed by artificial intelligence. Sure, there have been other stabs. AI has been around since the 1950s, and many publications have experimented with AI-created images as the technology has lurched and leaped forward over the past 70 years. Just last week, The Economist used an AI bot to generate an image for its report on the state of AI technology and featured that image as an inset on its cover. This Cosmo cover is the first attempt to go the whole nine yards. "It looks like Mary Poppins," says Mallory Roynon, creative director of Cosmopolitan, who appears unruffled by the fact that she's directing an algorithm to assist with one of the more important functions of her job.


OpenAI's DALL-E 2 produces fantastical images of most anything you can imagine

Engadget

In January 2021, the OpenAI consortium, co-founded by Elon Musk and financially backed by Microsoft, unveiled its most ambitious project to date, the DALL-E machine learning system. This ingenious multimodal AI was capable of generating images (albeit rather cartoonish ones) based on the attributes described by a user; think "a cat made of sushi" or "an X-ray of a capybara sitting in a forest." On Wednesday, the consortium unveiled DALL-E's next iteration, which boasts higher resolution and lower latency than the original. The first DALL-E (a portmanteau of "Dali," as in the artist, and "WALL-E," as in the animated Pixar character) could generate images as well as combine multiple images into a collage, provide varying angles of perspective, and even infer elements of an image, such as shadowing effects, from the written description. "Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to 'fill in the blanks' when the caption implies that the image must contain a certain detail that is not explicitly stated," the OpenAI team wrote in 2021.