
A Proposal for Foley Sound Synthesis Challenge


We during post-production to enhance its perceived acoustic properties, review recent machine learning challenges in audio, speech, and e.g., by simulating the sounds of footsteps, ambient environmental music research in Section 2 and existing works and datasets in Section sounds, or visible objects on the screen. While foley is traditionally 3. In Section 4, we provide a proposal for foley sound synthesis produced by foley artists, there is increasing interest in automatic challenge that includes problem definition, datasets, and evaluation or machine-assisted techniques building upon recent advances in metrics. We conclude the paper in Section 5. sound synthesis and generative models. To foster more participation in this growing research area, we propose a challenge for automatic 2. CASE STUDY: RESEARCH CHALLENGES foley synthesis. Through case studies on successful previous challenges in audio and machine learning, we set the goals of In this section, we review five existing research challenges: Blizzard the proposed challenge: rigorous, unified, and efficient evaluation Challenge, CHiME, DCASE, Music Demixing challenge, and of different foley synthesis systems, with an overarching goal of AI Song Contest. The former three are relatively mature while the drawing active participation from the research community. We outline latter two started after 2020. All of them started along with the increasing the details and design considerations of a foley sound synthesis popularity of the research problems and have contributed challenge, including task definition, dataset requirements, and evaluation to the continued growth by defining the tasks, providing common criteria.


The Weird, Analog Delights of Foley Sound Effects

The New Yorker

This content can also be viewed on the site it originates from. The salvage yard at M. Maselli & Sons, in Petaluma, California, is made up of six acres of angle irons, block pulleys, doorplates, digging tools, motors, fencing, tubing, reels, spools, and rusted machinery. To the untrained eye, the place is a testament to the enduring power of American detritus, but to Foley artists--craftspeople who create custom sound effects for film, television, and video games--it's a trove of potential props. On a recent morning, Shelley Roden and John Roesch, Foley artists who work at Skywalker Sound, the postproduction audio division of Lucasfilm, stood in the parking lot, considering the sonic properties of an enormous industrial hopper. "I'm looking for a resonator, and I need more ka-chunkers," Roden, who is blond and in her late forties, said.


AI-generated sound effects are now fooling human ears

#artificialintelligence

If you'll permit us to spoil a little bit of movie magic, many of the sound effects you hear in film and TV are actually recreated and edited in later by Foley artists. Now, researchers are attempting to create sound effect-generating artificial intelligence to see if it can do their jobs well enough to fool the general population. In a recent study, a small cohort of participants fell for the trick: most of them believed that the AI-generated noises were real, IEEE Spectrum reports. Sometimes, they even chose the AI version over a video's original audio. In the study, which was published in June in the journal IEEE Transactions on Multimedia, 41 of the 53 participants were fooled by the AI-generated sounds.


MIT's New AI Can (Sort of) Fool Humans With Sound Effects

WIRED

Neural networks are already beating us at games, organizing our smartphone photos, and answering our emails. Eventually, they could be filling jobs in Hollywood. Over at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), a team of six researchers created a machine-learning system that matches sound effects to video clips. Before you get too excited: the CSAIL algorithm can't do its audio work on any old video, and the sound effects it produces are limited. For the project, CSAIL PhD student Andrew Owens and postdoc Phillip Isola recorded videos of themselves whacking a bunch of things with drumsticks: stumps, tables, chairs, puddles, banisters, dead leaves, the dirty ground.