Sean Hannity slams Biden: 'If he had a fastball, it's gone. If he had a slow pitch, that's gone, too'

FOX News

Sean Hannity promises to vet former Vice President Joe Biden. Sean Hannity set his sights on Democratic presidential frontrunner Joe Biden Thursday night, saying he was taking it upon himself to vet the former vice president. "The mob will circle the wagons around the chosen one. They will act like his gaffes mean nothing. They may also ignore his record, which is atrocious, and many other things," Hannity said on his television program.


I'm Sorry for Your Loss: Spectrally-Based Audio Distances Are Bad at Pitch

arXiv.org Artificial Intelligence

Growing research demonstrates that synthetic failure modes imply poor generalization. We compare commonly used audio-to-audio losses on a synthetic benchmark, measuring the pitch distance between two stationary sinusoids. The results are surprising: many have a poor sense of pitch direction. These shortcomings are exposed using simple rank assumptions. Our task is trivial for humans but difficult for these audio distances, suggesting significant progress can be made in self-supervised audio learning by improving current losses.
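A minimal sketch of the kind of benchmark the abstract describes (not the authors' code): hold one sinusoid fixed, sweep a second sinusoid upward in pitch, compute a simple log-magnitude spectral L1 distance, and check with a rank correlation whether the distance actually grows with the pitch offset. The sample rate, frame size, and frequencies below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

SR = 16000        # sample rate (Hz), assumed for illustration
DUR = 1.0         # signal length in seconds
N_FFT = 2048      # analysis frame length

def sinusoid(freq_hz, sr=SR, dur=DUR):
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq_hz * t)

def log_mag_spectrum(x, n_fft=N_FFT):
    # Hann-windowed log-magnitude spectrum of the first frame; a single frame
    # suffices here because the test signals are stationary.
    frame = x[:n_fft] * np.hanning(n_fft)
    return np.log1p(np.abs(np.fft.rfft(frame)))

def spectral_l1(x, y):
    return np.mean(np.abs(log_mag_spectrum(x) - log_mag_spectrum(y)))

ref_hz = 440.0
offsets_cents = np.linspace(10, 1200, 50)          # 10 cents up to one octave
probe_hz = ref_hz * 2 ** (offsets_cents / 1200)

ref = sinusoid(ref_hz)
dists = np.array([spectral_l1(ref, sinusoid(f)) for f in probe_hz])

# If the distance tracked pitch, the rank correlation with the offset would be ~1;
# a low or unstable value illustrates the "poor sense of pitch direction" claim.
rho, _ = spearmanr(offsets_cents, dists)
print(f"Spearman rank correlation with pitch offset: {rho:.3f}")
```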


AP Source: Baseball Players Reject Pitch Clock, Mound Limits

U.S. News

A person familiar with the decision tells The Associated Press the players' association has rejected Major League Baseball's proposal to institute 20-second pitch clocks and limits on mound visits, a move that dares management to unilaterally impose the changes designed to speed the pace of games.


Monaural Speech Separation

Neural Information Processing Systems

Monaural speech separation has been studied in previous systems that incorporate auditory scene analysis principles. A major problem for these systems is their inability to deal with speech in the high-frequency range. Psychoacoustic evidence suggests that different perceptual mechanisms are involved in handling resolved and unresolved harmonics. Motivated by this, we propose a model for monaural separation that deals with low-frequency and high-frequency signals differently. For resolved harmonics, our model generates segments based on temporal continuity and cross-channel correlation, and groups them according to periodicity. For unresolved harmonics, the model generates segments based on amplitude modulation (AM) in addition to temporal continuity and groups them according to AM repetition rates derived from sinusoidal modeling. Underlying the separation process is a pitch contour obtained according to psychoacoustic constraints. Our model is systematically evaluated, and it yields substantially better performance than previous systems, especially in the high-frequency range.
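A toy sketch of one building block mentioned in the abstract (not the authors' implementation): estimating the AM repetition rate of a high-frequency channel's envelope by autocorrelation, which is one way unresolved harmonics can be grouped by a shared modulation rate. The signal parameters and the crude rectification envelope are illustrative assumptions.

```python
import numpy as np

SR = 16000  # sample rate (Hz), assumed for illustration

def am_repetition_rate(envelope, sr=SR, fmin=80.0, fmax=400.0):
    """Dominant envelope repetition rate (Hz) via autocorrelation over a pitch-like lag range."""
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)                   # lag search range in samples
    best_lag = lo + np.argmax(ac[lo:hi])
    return sr / best_lag

# Simulated unresolved-harmonic channel: a 3 kHz carrier modulated at 200 Hz.
t = np.arange(SR) / SR
channel = (1 + 0.8 * np.cos(2 * np.pi * 200 * t)) * np.sin(2 * np.pi * 3000 * t)
envelope = np.abs(channel)                                    # crude envelope via rectification
print(f"Estimated AM rate: {am_repetition_rate(envelope):.1f} Hz")  # expected near 200 Hz
```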

