Last week, while giving a commencement speech to New York University graduates, pop star Taylor Swift offered a timely bit of advice: "No matter how hard you try to avoid being cringe, you will look back on your life and cringe retrospectively. Cringe is unavoidable over a lifetime." We live in inescapably cringe-y times. Los Angeles-based writer K. Allado-McDowell's new novel, Amor Cringe, is a love letter to cringe maximalism. Allado-McDowell set out to write the cringiest story possible and ended up creating an odd, surprisingly funny little book.
Sonos, the wireless home-audio specialist, is launching a lower-cost model of its popular TV soundbars alongside its own new voice control system for its smart speakers after its public bust-up with Google. The new Ray soundbar is a more compact version of Sonos's popular Arc and Beam models, designed to fit neatly in TV stands without affecting sound quality. It connects to a TV through an optical cable, has wifi for streaming music and can be controlled with the Sonos app or a TV remote. The Ray will cost £279 in the UK or $279 in the US from 7 June, sitting below the £449 Beam as the firm's most affordable model. It has two tweeters and two midwoofer speakers, along with the company's Trueplay smart tuning system, promising balanced sound with solid bass and crisp dialogue.
Sonos appears to be gearing up to roll out its long-rumored voice assistant in the coming weeks. Sonos Voice is said to offer voice control for music playback on many of the company's devices, giving owners another option if they'd rather not use Amazon Alexa or Google Assistant. Sonos will first roll out Sonos Voice in the US on June 1st as part of a software update, according to The Verge, with other countries to follow later. The rumored $250 Sonos Ray soundbar will likely be among the supported devices.
Resonance, a powerful and pervasive phenomenon, appears to play a major role in human interactions. This article investigates the relationship between the physical mechanism of resonance and the human experience of resonance, and considers possibilities for enhancing the experience of resonance within human–robot interactions. We first introduce resonance as a widespread cultural and scientific metaphor. Then, we review the nature of “sympathetic resonance” as a physical mechanism. Following this introduction, the remainder of the article is organized in two parts. In part one, we review the role of resonance (including synchronization and rhythmic entrainment) in human cognition and social interactions. Then, in part two, we review resonance-related phenomena in robotics and artificial intelligence (AI). These two reviews serve as ground for the introduction of a design strategy and combinatorial design space for shaping resonant interactions with robots and AI. We conclude by posing hypotheses and research questions for future empirical studies and discuss a range of ethical and aesthetic issues associated with resonance in human–robot interactions.
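The "sympathetic resonance" the article reviews as a physical mechanism can be illustrated with the textbook driven, damped harmonic oscillator: a system responds most strongly when driven near its natural frequency. The sketch below is an illustrative aside, not part of the article; the parameter values are arbitrary assumptions.

```python
import numpy as np

def steady_state_amplitude(omega_drive, omega0=2 * np.pi, gamma=0.5, F=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator:
    A(w) = F / sqrt((w0^2 - w^2)^2 + (gamma * w)^2).
    omega0, gamma, and F are arbitrary example values."""
    return F / np.sqrt((omega0**2 - omega_drive**2)**2 + (gamma * omega_drive)**2)

# Sweep the driving frequency and find where the response peaks.
freqs = np.linspace(0.1, 4 * np.pi, 500)
amps = steady_state_amplitude(freqs)
peak = freqs[np.argmax(amps)]
# The response peaks near the natural frequency omega0: a vibrating source
# drives a nearby system most strongly when their frequencies match, which
# is the essence of sympathetic resonance.
```

The same frequency-matching intuition underlies the synchronization and entrainment phenomena discussed in part one of the article.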
It was a technological feat that made history, wowed audiences, and brought a dead rapper back to life. In April 2012 at the Coachella festival in California, Tupac Shakur took to the stage with Snoop Dogg and Dr Dre. Shakur had been dead for 16 years by then; it was a lifelike hologram of Tupac performing before a "shocked and then amazed" crowd. Fast forward ten years, and May 2022 will see the latest technological advance in musical immortality when Abba returns to the live stage after a 40-year absence. But this time, the group returns as humanoids – the digital hologram "twins" of the original global phenomenon.
Deep learning has radically transformed the fields of computer vision and natural language processing, in generative as well as classification tasks, enabling the creation of strikingly realistic images and artificially generated news articles. In this project, we aim to create novel neural network architectures to generate new music, using 20,000 MIDI samples of different genres from the Lakh Piano Dataset, a popular benchmark for recent music generation tasks. This project was a group effort by Isaac Tham and Matthew Kim, senior undergraduates at the University of Pennsylvania. Music generation using deep learning techniques has been a topic of interest for the past two decades. Music poses a different challenge from images in several key ways: first, music is temporal, with a hierarchical structure and dependencies across time; second, music consists of multiple interdependent instruments that unfold together over time.
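A common first step in MIDI-based generation projects like this is converting note events into a piano-roll tensor, with time discretized into steps and pitch into the 128 MIDI numbers. The sketch below is a minimal illustration of that encoding; the note tuples are hypothetical example data, not drawn from the Lakh dataset, and the function is not the project's actual preprocessing code.

```python
import numpy as np

def to_piano_roll(notes, n_steps, n_pitches=128):
    """Encode notes as a (time x pitch) piano roll.
    notes: list of (midi_pitch, start_step, end_step, velocity) tuples."""
    roll = np.zeros((n_steps, n_pitches), dtype=np.float32)
    for pitch, start, end, vel in notes:
        # Mark the note as active over [start, end), normalizing velocity to [0, 1].
        roll[start:end, pitch] = vel / 127.0
    return roll

# Hypothetical three-note melody: C4, E4, G4.
melody = [(60, 0, 4, 100), (64, 4, 8, 100), (67, 8, 16, 90)]
roll = to_piano_roll(melody, n_steps=16)
```

A tensor in this shape can then be fed to recurrent or convolutional architectures, which is one way the temporal and multi-instrument structure mentioned above becomes a modeling problem.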
Elon Musk-backed Neuralink is teeing up for human clinical trials, with a view to accomplishing its first human brain implant by the end of the year. However, this is just the beginning. Remember the movie Transcendence, in which Johnny Depp's character (Dr Will Caster) turns into a superintelligent AI? Well, the timeline to superintelligence is shortening. Last December, Philip O'Keefe, a 63-year-old Australian with ALS, tweeted his thoughts with the help of a computer chip implanted in his brain.
Peloton today launched Lanebreak, a new series of workouts for its connected stationary bike that mimics a racing game. Riders get behind a virtual wheel, race down a multi-lane highway and gain points for higher levels of output and resistance. The fitness company briefly beta tested Lanebreak last July, and is now launching the new mode as a software update to all Peloton bikes in the US, UK, Canada, Germany and Australia. Unlike Peloton's usual classes, there is no instructor; instead, riders can choose from a selection of pop-centric playlists to listen to in the background, featuring the likes of David Guetta, David Bowie, Bruno Mars and Ed Sheeran. For Peloton riders who are bored with the usual slate of instructor-led classes, Lanebreak adds a change of pace.
Melody choralization, i.e. generating a four-part chorale based on a user-given melody, has long been closely associated with J.S. Bach chorales. Previous neural network-based systems rarely focus on chorale generation conditioned on a chord progression, and none of them realised controllable melody choralization. To enable neural networks to learn the general principles of counterpoint from Bach's chorales, we first design a music representation that encodes chord symbols for chord conditioning. We then propose DeepChoir, a melody choralization system, which can generate a four-part chorale for a given melody conditioned on a chord progression. Furthermore, with improved density sampling, a user can control the extent of harmonicity and polyphonicity of the chorale generated by DeepChoir. Experimental results reveal the effectiveness of our data representation and the controllability of DeepChoir over harmonicity and polyphonicity. The code, generated samples (chorales, folk songs and a symphony), and the dataset we used are available at https://github.com/sander-wood/deepchoir.
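One simple way to give a sequence model access to a chord progression is to interleave chord-symbol tokens with note tokens, so the chord context precedes the notes it governs. The sketch below illustrates that general idea only; it is not DeepChoir's actual encoding, and the token names, step grid, and example sequence are all assumptions.

```python
def encode(melody, chords, steps_per_chord=4):
    """Interleave chord tokens with note tokens.
    melody: one MIDI pitch per time step; chords: one chord symbol per chord span.
    Token vocabulary ("chord_X", "note_N") is illustrative, not DeepChoir's."""
    tokens = []
    for i, pitch in enumerate(melody):
        # Emit the governing chord token at the start of each chord span.
        if i % steps_per_chord == 0:
            tokens.append(f"chord_{chords[i // steps_per_chord]}")
        tokens.append(f"note_{pitch}")
    return tokens

# Hypothetical 8-step melody harmonized over C then G.
seq = encode([60, 62, 64, 65, 67, 65, 64, 62], ["C", "G"])
```

Placing each chord token before its span lets an autoregressive model condition the harmonization of upcoming notes on the intended harmony.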
We present a novel music generation framework for music infilling, with a user-friendly interface. Infilling refers to the task of generating musical sections given the surrounding multi-track music. The proposed transformer-based framework is extensible to new control tokens, such as the per-bar tonal tension and track polyphony level tokens added in this work. We explore the effects of including several musically meaningful control tokens, and evaluate the results using objective metrics related to pitch and rhythm. Our results demonstrate that adding control tokens helps to generate music with stronger stylistic similarities to the original music. It also gives the user more control over properties like the music texture and tonal tension in each bar, compared to previous research that only provided control over track density. We present the model in a Google Colab notebook to enable interactive generation.
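Per-bar control tokens of this kind are typically prepended to each bar's event tokens, so the model reads the desired tension and polyphony level before generating the bar's content. The sketch below illustrates that scheme in general terms; the token names, bin indices, and example bars are assumptions, not the paper's exact vocabulary.

```python
def add_control_tokens(bars, tensions, polyphony):
    """Prepend per-bar control tokens to each bar's event tokens.
    bars: list of token lists (one per bar);
    tensions, polyphony: one discretized bin index per bar.
    Token names ("tension_T", "polyphony_P") are illustrative."""
    out = []
    for bar, t, p in zip(bars, tensions, polyphony):
        out.append(f"tension_{t}")    # desired tonal tension bin for this bar
        out.append(f"polyphony_{p}")  # desired polyphony level for this bar
        out.extend(bar)               # the bar's note/event tokens follow
    return out

# Two hypothetical bars with different control settings.
tokens = add_control_tokens(
    [["note_60", "note_64"], ["note_62"]],
    tensions=[1, 3], polyphony=[2, 1])
```

At inference time, a user can then steer the infilled section simply by editing the control tokens for each bar before generation.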