AI voice synthesising is being hailed as the future of video games – but at what cost?

The Guardian

When the epic open-world PlayStation 4 game Red Dead Redemption 2 was developed in 2013, it took 2,200 days to record the 1,200 voices in the game with 700 voice actors, who recited the 500,000 lines of dialogue. It was a massive feat that is nearly impossible for any other studio to replicate – let alone a games studio smaller than Rockstar Games. But with advances in artificial intelligence it is becoming ever easier to synthesise human voices, enabling automated real-time responses, near-limitless dialogue options and speech tailored to a user's unique input. The technology, however, raises questions about the ethics of synthesising voices. The Australian software developer Replica Studios rolled out a voice synthesiser platform for games developers in 2019 – a tool used by Australian games developer PlaySide Studios in their game Age of Darkness: Final Stand.


Could The Simpsons replace its voice actors with AI deepfakes?

#artificialintelligence

In May 2015, The Simpsons voice actor Harry Shearer – who plays a number of key characters including, quite incredibly, both Mr Burns and Waylon Smithers – announced that he was leaving the show. By then, the animated series had been running for more than 25 years, and the pay of its vocal cast had risen from $30,000 an episode in 1998 to $400,000 an episode from 2008 onwards. But Fox, the producer of The Simpsons, was looking to cut costs – and was threatening to cancel the series unless the voice actors took a 30 per cent pay cut. Most of them agreed, but Shearer (who had been critical of the show's declining quality) refused to sign – after more than two decades, he wanted to break out of the golden handcuffs, and win back the freedom and the time to pursue his own work. Showrunner Al Jean said Shearer's iconic characters – who also include Principal Skinner, Ned Flanders and Otto Mann – would be recast.


'Photoshop for voice': Meet the Brisbane startup creating a global marketplace for our voices - SmartCompany

#artificialintelligence

A Brisbane startup pitching itself as "Photoshop for voice" has taken the wraps off its first product, a creative studio powered by an AI able to perfectly replicate someone's voice. Fittingly named Replica, the startup is headed up by founding trio Shreyas Nivas, Riccardo Grinover and Keni Mardira, and has been running in a stealth mode of sorts for the past two years while the three built the product. With each founder bringing a different area of expertise, Nivas -- who holds a degree in aerospace engineering -- tells StartupSmart the team was brought together through a love for games, and inspired by research into voice synthesis coming out of Google. "Around late 2016, Google released a paper on WaveNet, which was a neural network for voice synthesis which could create human-like speech. It was a natural transition from text-to-speech, and it was almost indistinguishable from actual conversation," he says.


Knowledge Graph Embedding for Ecotoxicological Effect Prediction

Myklebust, Erik B., Jimenez-Ruiz, Ernesto, Chen, Jiaoyan, Wolf, Raoul, Tollefsen, Knut Erik

arXiv.org Artificial Intelligence

Exploring the effects a chemical compound has on a species takes considerable experimental effort. Appropriate methods for estimating and suggesting new effects can dramatically reduce the laboratory work required. In this paper we explore the suitability of a knowledge graph embedding approach for ecotoxicological effect prediction. A knowledge graph has been constructed from publicly available data sets, including a species taxonomy and chemical classification and similarity. The publicly available effect data is integrated into the knowledge graph using ontology alignment techniques. Our experimental results show that the knowledge graph based approach improves on the selected baselines.
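
The abstract does not name a specific embedding model, but the general idea of knowledge graph embedding can be sketched as follows: entities (chemicals, species, effects) and relations are mapped to vectors, and a candidate (head, relation, tail) triple is scored by how well the vectors fit together. This is a minimal illustration using a TransE-style distance score; the entity names, relation names, and random vectors are all hypothetical toy values, not the paper's actual data or model.

```python
import numpy as np

# Toy knowledge-graph-embedding sketch (TransE-style scoring).
# All names below are hypothetical examples, not taken from the paper.
rng = np.random.default_rng(0)
dim = 8

entities = ["chemical:Atrazine", "species:Daphnia_magna", "effect:Mortality"]
relations = ["affects", "hasEffect"]

# In a real system these vectors would be learned from observed triples;
# here they are random, purely to show the scoring mechanics.
ent_vecs = {e: rng.normal(size=dim) for e in entities}
rel_vecs = {r: rng.normal(size=dim) for r in relations}

def score(head: str, relation: str, tail: str) -> float:
    """TransE score: a smaller ||h + r - t|| means a more plausible triple."""
    h, r, t = ent_vecs[head], rel_vecs[relation], ent_vecs[tail]
    return float(np.linalg.norm(h + r - t))

# Effect prediction then becomes ranking candidate tails for a query triple
# (chemical, affects, ?) by score, lowest (most plausible) first.
candidates = ["species:Daphnia_magna", "effect:Mortality"]
ranked = sorted(candidates, key=lambda t: score("chemical:Atrazine", "affects", t))
print(ranked)
```

With trained embeddings, the top-ranked completions for unseen triples suggest which effects to test experimentally, which is how such a model can reduce laboratory effort.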