5 Artificial Intelligence Applications for Editorial Post Teams

#artificialintelligence

Artificial intelligence applications in video post-production come in many flavors: automatic transcription and metadata entry, speeding up lengthy color correction and VFX processes, and much more. Fears of computational overreach, though, have become common in film and TV. Usually, these worries are a reaction to news of studios using performance-prediction algorithms to decide which films or TV series get made. Those fears are overblown, at least where film and TV post-production is concerned: artificial intelligence is in no danger of replacing the most vital post-production creative processes.


Brutalist AI-generated buildings feature in hypnotic Moullinex music videos

#artificialintelligence

Lisbon musician Moullinex has shared with Dezeen an exclusive short music video showing an endlessly changing landscape of brutalist buildings drawn up by a generative design algorithm. Moullinex, whose real name is Luís Clara Gomes, created two videos that use artificial intelligence (AI) to imagine a series of brutalist buildings. The first video, which the artist shared on his Facebook page, is based on 200 photographs of modernist concrete buildings. These images acted as the dataset used to train a generative network, via the machine learning tool StyleGAN2, to create a string of entirely non-existent buildings with similar characteristics. "It's akin to showing thousands of pictures of a cat to a child and then asking them to draw a brand new cat based on what they now know are cat-like characteristics," Gomes told Dezeen.
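
The endlessly morphing footage is characteristic of a latent-space walk: once a generator has been trained on the photographs, interpolating between random latent vectors and decoding each intermediate point yields smoothly changing frames. A minimal PyTorch sketch of that idea, with a toy generator standing in for the trained StyleGAN2 model (the architecture and dimensions here are illustrative, not the project's):

```python
import torch
import torch.nn as nn

# Toy stand-in generator: in the project itself this role is played by
# StyleGAN2 trained on the ~200 photographs of concrete buildings. Any
# network mapping a latent vector z to an image supports the same trick.
class ToyGenerator(nn.Module):
    def __init__(self, z_dim=128, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, z):
        x = self.net(z)
        return x.view(-1, 3, self.img_size, self.img_size)

G = ToyGenerator()

# An "endlessly changing" video falls out of a walk through latent space:
# interpolate between random latent vectors and decode every step.
z_a, z_b = torch.randn(1, 128), torch.randn(1, 128)
frames = []
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps=60):  # 60 frames per segment
        z_t = (1 - t) * z_a + t * z_b             # linear latent interpolation
        frames.append(G(z_t)[0])                  # one image tensor per frame
```

Chaining such segments (ending each walk at the starting latent of the next) produces arbitrarily long, seamless videos.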


Grimes and Endel made an AI-powered lullaby

#artificialintelligence

Synth-pop artist and Cyberpunk 2077 cast member Grimes has teamed up with mood music startup Endel to create an AI-powered soundscape designed to help you drift off. "AI Lullaby" blends vocals and original music from Grimes with personalized sounds that an algorithm generates in real time. Grimes told the New York Times she was partly inspired to work on the project while seeking "a better baby sleeping situation" for her young son, X Æ A-XII. "AI Lullaby" is available through Endel's iOS app until December 23rd, and you'll be able to check it out via Android and Alexa before the end of the year. Proceeds from the soundscape will go towards two nonprofits: A.I. for Everyone and the Naked Heart Foundation.


AI helps raise awareness of the conservation crisis - SiliconANGLE

#artificialintelligence

Sound is all around us, every minute of our lives. Our first experience of the world comes from listening while inside the womb, and hearing is the last sense to fail when we die. Sound never ceases, not in the darkest night or the furthest reaches of the ocean deep. All these sounds are data, and (as we know) data is a resource that can be mined for insights -- in this case, insights that could help halt the mass extinction that scientific studies show is underway on planet Earth. "Our mission is to use sound as a lens to study the Earth, to capture it in ways that are meaningful, and to bring that back to the public to tell them a story about how the Earth exists," said Bryan Pijanowski (pictured), professor at Purdue University and director of the Human and Environment Modeling and Analysis Laboratory.


Renault Pilots Self-Driving Ride-Hailing Service PYMNTS.com

#artificialintelligence

Renault has started a public trial of its on-demand car service on the Paris-Saclay urban campus, according to a press release on Monday (Oct. 14). A panel of around 100 people will use the service (provided by two electric, autonomous, shared Renault ZOE Cab prototypes) on the campus from Oct. 14 through Nov. 8. Two cars with different features will be tested, with passengers able to hail them using the mobile app Marcel Saclay, which was designed specifically for the ZOE Cab experiment. Users can request a car on demand or book one in advance. The cars will stop en route to pick up additional passengers.


Somnox Review: Snuggling With a Robot Could Help You Fall Asleep

WIRED

Let's get this out of the way: I am sleeping with a robot. I hold it in my arms each night and feel its chest rise and fall against mine. Without arms to hold me back, it is forever my little spoon. Without a voice to bid me sweet dreams, it simply sits there, purring against me. The robot with which I sleep is called the Somnox.


Monitoring the shape of weather, soundscapes, and dynamical systems: a new statistic for dimension-driven data analysis on large data sets

arXiv.org Machine Learning

Dimensionality-reduction methods are a fundamental tool in the analysis of large data sets. These algorithms work on the assumption that the "intrinsic dimension" of the data is generally much smaller than the ambient dimension in which it is collected. Alongside their usual purpose of mapping data into a smaller dimension with minimal information loss, dimensionality-reduction techniques implicitly or explicitly provide information about the dimension of the data set. In this paper, we propose a new statistic that we call the $\kappa$-profile for analysis of large data sets. The $\kappa$-profile arises from a dimensionality-reduction optimization problem: namely, that of finding a projection into $k$ dimensions that optimally preserves the secants between points in the data set. From this optimal projection we extract $\kappa$, the norm of the shortest projected secant from among the set of all normalized secants. This $\kappa$ can be computed for any $k$; thus the tuple of $\kappa$ values (indexed by dimension) becomes a $\kappa$-profile. Algorithms such as the Secant-Avoidance Projection algorithm and the Hierarchical Secant-Avoidance Projection algorithm provide a computationally feasible means of estimating the $\kappa$-profile for large data sets, and thus a method of understanding and monitoring their behavior. As we demonstrate in this paper, the $\kappa$-profile serves as a useful statistic in several representative settings: weather data, soundscape data, and dynamical systems data.
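
The construction is easy to prototype: form all normalized secants, project them into $k$ dimensions, and record the shortest projected length for each $k$. A minimal NumPy sketch, using PCA as a stand-in for the paper's secant-avoidance projections (which instead search for the projection that maximizes this shortest length):

```python
import numpy as np
from itertools import combinations

def kappa_profile(X, k_max):
    """Approximate the kappa-profile of data X (n_points x ambient_dim).

    PCA is used here as a stand-in projection; the paper's Secant-Avoidance
    Projection instead searches for the k-dimensional projection that
    maximizes the shortest projected secant norm.
    """
    # All normalized secants between distinct pairs of points.
    pairs = combinations(range(len(X)), 2)
    secants = np.array([(X[i] - X[j]) / np.linalg.norm(X[i] - X[j])
                        for i, j in pairs])
    # Principal directions of the centered data.
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    profile = []
    for k in range(1, k_max + 1):
        projected = secants @ Vt[:k].T  # project every secant into k dims
        profile.append(np.linalg.norm(projected, axis=1).min())  # kappa at k
    return profile

# Example: rank-2 data embedded in R^5. Kappa rises sharply once k reaches
# the intrinsic dimension, since no secant is collapsed by the projection.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 5))
print(kappa_profile(X, 5))
```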


This AI has synesthesia

#artificialintelligence

This year, the DJ, artist, and Qosmo CEO Nao Tokui flipped the concept on its head. His project, Imaginary Soundscapes, is a convolutional neural network that hears sounds when it looks at images. Based on a given image, the software will choose from 15,000 sound files to find the "soundscape" that fits. First, he applied the software to Google Street View to create an audio tour of the world, with AI-generated sounds to accompany any scene from Street View, from echoing voices in Barcelona's Sagrada Familia cathedral to chirping birds on rural backroads. Viewers could "immerse themselves into the artificial soundscape 'imagined' by our deep learning models," Tokui explained on Medium.
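
At retrieval time, a setup like this reduces to nearest-neighbor search in a shared embedding space: the CNN encodes the query image, each of the 15,000 sound files has a precomputed embedding, and the best-scoring sound wins. A minimal sketch of that step, with random vectors standing in for the project's learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for learned embeddings: in the project, a CNN maps the
# query image and each of the ~15,000 sound files into a shared space.
sound_embeddings = rng.standard_normal((15000, 512))  # one row per sound file
image_embedding = rng.standard_normal(512)            # CNN output for the photo

# Cosine similarity between the image and every candidate soundscape.
sounds = sound_embeddings / np.linalg.norm(sound_embeddings, axis=1, keepdims=True)
image = image_embedding / np.linalg.norm(image_embedding)
scores = sounds @ image

best_index = int(np.argmax(scores))  # the best-fitting soundscape
```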


BirdCLEF 2018 ImageCLEF / LifeCLEF - Multimedia Retrieval in CLEF

#artificialintelligence

As in 2017, two scenarios will be evaluated: (i) the identification of a particular bird specimen in a recording of it, and (ii) the recognition of all specimens singing in a long sequence (up to one hour) of raw soundscapes that can contain tens of birds singing simultaneously. The first scenario is aimed at developing new interactive identification tools to help users and experts, who today are equipped with directional microphones and spend too much time observing and listening to birds to assess their populations in the field. The soundscapes, on the other hand, correspond to a passive monitoring scenario in which any multi-directional audio recording device could be used with little or no user involvement, enabling efficient biodiversity assessment. The goal of the task is to identify the species of the most audible bird (i.e. the one that was intended to be recorded) in each of the provided test recordings. The evaluated systems therefore have to return a ranked list of possible species for each of the 12,347 test recordings.
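
Ranked lists like these are naturally scored with a rank-based metric; assuming mean reciprocal rank (the usual choice for this kind of BirdCLEF subtask), scoring reduces to averaging the reciprocal rank of the true most-audible species over all recordings. A minimal sketch with made-up recording IDs and species codes:

```python
def mean_reciprocal_rank(predictions, ground_truth):
    """predictions: dict recording_id -> ranked list of species codes.
    ground_truth: dict recording_id -> code of the most audible species."""
    total = 0.0
    for rec_id, true_species in ground_truth.items():
        ranked = predictions.get(rec_id, [])
        if true_species in ranked:
            total += 1.0 / (ranked.index(true_species) + 1)  # ranks are 1-based
    return total / len(ground_truth)

# Hypothetical example: the true species ranked second scores 1/2.
preds = {"rec_001": ["turdus_merula", "parus_major", "erithacus_rubecula"]}
truth = {"rec_001": "parus_major"}
print(mean_reciprocal_rank(preds, truth))  # 0.5
```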


Imaginary Soundscape -- Take a walk in soundscapes "imagined" by AI

#artificialintelligence

These interests in soundscapes and in "fantasizing" AI led to my latest project, "Imaginary Soundscape". As I wrote, one can imagine scenes from a sound. Conversely, by taking a glance at a photo, we can imagine the sounds we might hear if we were there. Can an AI system do the same? If so, what if we applied the method to images from Google Street View, so that we could walk around with the generated soundscape?