

AI avatar generator Synthesia does video footage deal with Shutterstock

The Guardian

A $2bn (£1.6bn) British startup that uses artificial intelligence to generate realistic avatars has struck a licensing deal with the stock footage firm Shutterstock to help develop its technology. Synthesia will pay the US-based Shutterstock an undisclosed sum to use its library of corporate video footage to train its latest AI model. It expects that incorporating the clips into its model will produce even more realistic expressions, vocal tones and body language from the avatars. "Thanks to this partnership with Shutterstock, we hope to try out new approaches that will … increase the realism and expressiveness of our AI-generated avatars, bringing them closer to human-like performances," said Synthesia. Synthesia uses human actors to generate digital avatars of people, which companies then deploy in corporate videos covering scenarios such as advising on cybersecurity, explaining water bills and communicating better at work.


DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning

Lebensold, Jonathan, Sanjabi, Maziar, Astolfi, Pietro, Romero-Soriano, Adriana, Chaudhuri, Kamalika, Rabbat, Mike, Guo, Chuan

arXiv.org Artificial Intelligence

Text-to-image diffusion models have been shown to suffer from sample-level memorization, possibly reproducing near-perfect replicas of the images they are trained on, which may be undesirable. To remedy this issue, we develop the first differentially private (DP) retrieval-augmented generation algorithm that is capable of generating high-quality image samples while providing provable privacy guarantees. Specifically, we assume access to a text-to-image diffusion model trained on a small amount of public data, and design a DP retrieval mechanism to augment the text prompt with samples retrieved from a private retrieval dataset. Our differentially private retrieval-augmented diffusion model (DP-RDM) requires no fine-tuning on the retrieval dataset to adapt to another domain, and can use state-of-the-art generative models to generate high-quality image samples while satisfying rigorous DP guarantees. For instance, when evaluated on MS-COCO, our DP-RDM can generate samples with a privacy budget of ε=10, while providing a 3.5-point improvement in FID compared to public-only retrieval for up to 10,000 queries.
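The core idea in the abstract — privatizing the retrieval step rather than fine-tuning the model — can be illustrated with a toy sketch. This is not the paper's actual mechanism or noise calibration; the function name `dp_retrieve`, the cosine-similarity scoring, and the Gaussian noise scale are all illustrative assumptions. It shows only the general pattern: perturb the query-to-database similarity scores before selecting neighbors, so the selected indices depend on any single private record in a noise-bounded way.

```python
import numpy as np

def dp_retrieve(query_emb, private_embs, k=4, epsilon=1.0, rng=None):
    """Toy DP-style retrieval: add Gaussian noise to similarity scores
    before taking the top-k. Illustrative only; DP-RDM's mechanism and
    privacy accounting differ."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Cosine similarities between the query and each private embedding.
    q = query_emb / np.linalg.norm(query_emb)
    db = private_embs / np.linalg.norm(private_embs, axis=1, keepdims=True)
    scores = db @ q  # bounded in [-1, 1], so sensitivity is bounded
    # Noise scale shrinks as the privacy budget epsilon grows (assumed
    # calibration for illustration, not the paper's).
    noisy = scores + rng.normal(scale=2.0 / epsilon, size=scores.shape)
    return np.argsort(noisy)[::-1][:k]  # indices of the noisy top-k

rng = np.random.default_rng(42)
private_embs = rng.normal(size=(100, 16))  # stand-in private dataset
query = rng.normal(size=16)                # stand-in text-prompt embedding
idx = dp_retrieve(query, private_embs, k=4, epsilon=5.0)
```

In the full system, the embeddings at these indices would then condition the public diffusion model's generation; repeated queries consume the privacy budget, which is why the abstract bounds the guarantee at 10,000 queries.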


The future of AI video is here, super weird flaws and all

Washington Post - Technology News

This is the future of AI video: videos like these are made completely by artificial intelligence. None of them depict real people, places or events. At first glance, the images amaze and confound: a woman strides along a city street alive with pedestrians and neon lights; a car kicks up a cloud of dust on a mountain road.


Cryptography may offer a solution to the massive AI-labeling problem

MIT Technology Review

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including its DALL-E-powered AI image generator. Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by "supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist's creation versus AI-generated or modified art." Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt in to labeling their visual and audio content with information about where it came from. Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone.


Snowflake Introduces Manufacturing Data Cloud to Empower Industries with Data and AI

#artificialintelligence

Data and AI technology for manufacturing is having a moment. As AI has expanded its reach into manufacturing, there have been several purpose-built releases lately from companies like Nvidia and Databricks that help companies make sense of the deluge of data collected on everything from physical operations to the supply chain. Snowflake is now part of this action with the debut of its Manufacturing Data Cloud. The company says this new offering will enable companies in the automotive, technology, energy, and industrial sectors to tap into the value of siloed industrial data by leveraging Snowflake's data platform, partner solutions, and industry-specific datasets. The Snowflake Data Cloud provides a platform for data warehousing, SQL analytics, machine learning, data engineering, and monetization of third-party data.


ChatGPT More Useful Than Crypto, Nvidia Tech Chief Says

#artificialintelligence

Unlike AI applications such as ChatGPT, cryptocurrencies do not bring "anything useful," a top executive of U.S. chip maker Nvidia is convinced. The comment comes despite his company making significant sales in the space, where its powerful processors are widely used to mint digital coins. Cryptocurrencies do not "bring anything useful for society," according to a high-ranking representative of Nvidia, the leading manufacturer of graphics processing units (GPUs). The executive expressed this opinion despite his company selling large quantities of video cards to the industry. Other uses of their processing power, such as those associated with artificial intelligence (AI) applications like the ChatGPT chatbot, are more worthwhile than mining crypto, Nvidia's Chief Technology Officer Michael Kagan told the Guardian.


Is there a way to pay content creators whose work is used to train AI? Yes, but it's not foolproof

AIHub

Is imitation the sincerest form of flattery, or theft? Perhaps it comes down to the imitator. Text-to-image artificial intelligence systems such as DALL-E 2, Midjourney and Stable Diffusion are trained on huge amounts of image data from the web. As a result, they often generate outputs that resemble real artists' work and style. It's safe to say artists aren't impressed. To further complicate things, although intellectual property law guards against the misappropriation of individual works of art, this doesn't extend to emulating a person's style.


A.I. Art Has a Big Problem, and It Isn't All the Weird Fingers

Slate

Last Monday, I began looking into why artificial intelligence is still so bad at creating hands. In recent weeks, lots of people have been sharing images that could be mistaken for photos of actual humans--until your eyes wander to the subjects' misshapen fingers. A.I.'s inability to create realistic hands is a long-standing issue, highlighting both that the technology needs refining and that fingers are extraordinary things. To compare various A.I. tools' hand skills, I entered this prompt into five different art generators: "A couple that has been together for 50 years holding hands after a fight." The hands were not stellar.


Are We Nearing the End of ML Modeling?

#artificialintelligence

Josh Tobin, the co-founder and CEO of machine learning tool provider Gantry, didn't want to believe it at first. But Tobin, who previously worked as a research scientist at OpenAI, eventually came to the conclusion that it was true: the end of traditional ML modeling is upon us. The idea that you no longer need to train a machine learning model, and can get better results by just using off-the-shelf models without any tuning on your own custom data, seemed wrong to Tobin, who spent years learning how to build these systems. When he first heard the idea after starting Gantry, which he co-founded in 2021 with fellow OpenAI alum Vicky Cheung, he didn't want to believe it. "The first four or five times I heard that, my thinking was like, okay, these companies just don't know what they're doing," Tobin said.


What Shutterstock's AI Image Generator Means for Users

#artificialintelligence

But Shutterstock found a way for people to utilize AI art generation in a far more ethical manner--a massive win for both artists and users. Keep reading to find out why. On 25 January 2023, Shutterstock released its very own AI image generator. The new feature came a few months after partnering with OpenAI, which is responsible for fueling the tool using DALL-E 2. But unlike other AI generation software, OpenAI trained DALL-E 2 with Shutterstock images and data, so that the end result is an image ready for licensing. Additionally, Shutterstock plans to compensate the artists whose images are used during the generative AI process by creating a cash fund and paying them royalties.