Arc Raiders replaced some of its AI-generated voice lines, using professional actors instead

Engadget

Embark Studios' CEO Patrick Söderlund admitted that there is a quality difference between using voice actors and AI. In an unexpected twist, humans have taken some jobs back from AI. Söderlund recently said that the studio re-recorded some of Arc Raiders' AI-generated voice lines with human voices following the game's successful launch in October. "There is a quality difference," Söderlund said. "A real professional actor is better than AI; that's just how it is." With Arc Raiders' player count peaking at nearly half a million users on Steam, the game's breakout success was still marred by its use of text-to-speech AI. While no generative AI was used for the extraction shooter's visuals, Embark Studios paid its actors for approval to license their voices for text-to-speech AI, according to Söderlund.


Latent Gaussian Activity Propagation: Using Smoothness and Structure to Separate and Localize Sounds in Large Noisy Environments

Neural Information Processing Systems

We present an approach for simultaneously separating and localizing multiple sound sources using recorded microphone data. Inspired by topic models, our approach is based on a probabilistic model of inter-microphone phase differences, and poses separation and localization as a Bayesian inference problem. We assume sound activity is locally smooth across time, frequency, and location, and use the known position of the microphones to obtain a consistent separation. We compare the performance of our method against existing algorithms on simulated anechoic voice data and find that it obtains high performance across a variety of input conditions.
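For readers who want to experiment, a minimal sketch of the paper's raw observations follows: inter-microphone phase differences computed from a short-time Fourier transform of each channel. The sample rate, signal, and two-sample delay are illustrative assumptions; the paper's actual contribution, Bayesian inference with smoothness priors over these observations, is not reproduced here.

# Sketch of the observations (not the paper's model): per-bin
# inter-microphone phase differences. Parameters are assumptions.
import numpy as np
from scipy.signal import stft

fs = 16000                               # sample rate (Hz), assumed
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 440 * t)        # synthetic source signal
mic1 = src
mic2 = np.roll(src, 2)                   # mic 2 hears it ~2 samples later

# STFT of each channel; phase difference per time-frequency bin.
f, frames, Z1 = stft(mic1, fs=fs, nperseg=512)
_, _, Z2 = stft(mic2, fs=fs, nperseg=512)
phase_diff = np.angle(Z1 * np.conj(Z2))  # wrapped to (-pi, pi]

# For one far-field source with inter-mic delay tau, the expected phase
# difference at frequency f is 2*pi*f*tau; candidate source locations
# imply candidate delays that can be scored against these observations.
tau = 2 / fs
expected = 2 * np.pi * f * tau
print(phase_diff.shape, expected.shape)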


Iran arrests dozens accused of spying for Israel in new internal crackdown

FOX News



Learning with SGD and Random Features

Neural Information Processing Systems

Sketching and stochastic gradient methods are arguably the most common techniques for deriving efficient large-scale learning algorithms. In this paper, we investigate their application in the context of nonparametric statistical learning. More precisely, we study the estimator defined by stochastic gradient descent with mini-batches and random features. The latter can be seen as a form of nonlinear sketching and can be used to define approximate kernel methods. The considered estimator is not explicitly penalized or constrained; regularization is implicit. Indeed, our study highlights how different parameters, such as the number of features, the number of iterations, the step-size, and the mini-batch size, control the learning properties of the solutions. We do this by deriving optimal finite-sample bounds under standard assumptions. The obtained results are corroborated and illustrated by numerical experiments.
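As a concrete illustration of the estimator the abstract describes, here is a minimal sketch: random Fourier features (a nonlinear sketch approximating a Gaussian kernel) trained by mini-batch SGD with no explicit penalty, so any regularization comes from the feature count, step-size, batch size, and iteration count. The data and all constants below are placeholder assumptions, not the paper's tuned values.

# Minimal sketch: mini-batch SGD on random Fourier features.
import numpy as np

rng = np.random.default_rng(0)
n, d, M = 2000, 5, 300                  # samples, input dim, random features

# Synthetic regression data (assumed for illustration).
X = rng.normal(size=(n, d))
y = np.sin(X.sum(axis=1)) + 0.1 * rng.normal(size=n)

# Random Fourier features for the Gaussian kernel (Rahimi & Recht):
# phi(x) = sqrt(2/M) * cos(W x + b), a form of nonlinear sketching.
sigma = 1.0
W = rng.normal(scale=1.0 / sigma, size=(d, M))
b = rng.uniform(0, 2 * np.pi, size=M)
Phi = np.sqrt(2.0 / M) * np.cos(X @ W + b)

# Unpenalized least squares, optimized by mini-batch SGD; regularization
# is implicit in step size, batch size, features, and iterations.
w = np.zeros(M)
step, batch, iters = 0.2, 32, 300
for _ in range(iters):
    idx = rng.integers(0, n, size=batch)        # with-replacement batch
    g = Phi[idx].T @ (Phi[idx] @ w - y[idx]) / batch
    w -= step * g

print("train MSE:", np.mean((Phi @ w - y) ** 2))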


Anthropic is doubling Claude's usage limits during off-peak hours for the next two weeks

Engadget

The promotion runs from March 13 to March 27. To capitalize on Claude's recent spike in popularity, Anthropic is offering a limited-time promotion that doubles usage limits for anyone using its AI chatbot during off-peak hours. From March 13 to March 27, users on the Free, Pro, Max, and Team plans will get double the usage limits within each five-hour window when using Claude outside the weekday peak hours of 8 AM to 2 PM ET. According to Anthropic, the promotion is automatic; users don't have to enable anything to get the benefits. As the company put it: "A small thank you to everyone using Claude: We're doubling usage outside our peak hours for the next two weeks."



ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions

Neural Information Processing Systems

Convolutional neural networks (CNNs) have shown great capability in solving various artificial intelligence tasks. However, their increasing model size has raised challenges for employing them in resource-limited applications. In this work, we propose to compress deep models by using channel-wise convolutions, which replace dense connections among feature maps in CNNs with sparse ones. Based on this novel operation, we build lightweight CNNs known as ChannelNets.
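A minimal sketch of the core operation may help: a channel-wise convolution slides a small 1-D filter along the channel axis, shared across spatial positions, so each output channel mixes only a few neighboring input channels instead of all of them. The shapes and filter size below are illustrative assumptions, not the paper's exact ChannelNet configuration.

# Sketch of a channel-wise convolution over a (C, H, W) feature map.
import numpy as np

def channel_wise_conv(x, kernel):
    """x: feature map (C, H, W); kernel: 1-D filter over channels (k,)."""
    C, H, W = x.shape
    k = len(kernel)
    out = np.zeros((C - k + 1, H, W))
    for c in range(C - k + 1):
        # Each output channel mixes only k neighboring input channels,
        # so parameters are O(k) instead of O(C_in * C_out).
        out[c] = np.tensordot(kernel, x[c:c + k], axes=(0, 0))
    return out

x = np.random.randn(64, 8, 8)           # 64 input channels (assumed)
kernel = np.random.randn(9)             # filter spanning 9 channels
y = channel_wise_conv(x, kernel)
print(y.shape)                          # (56, 8, 8)

# For comparison, a dense 1x1 convolution mapping 64 -> 56 channels
# would need 64 * 56 = 3584 weights; this sparse version uses just 9.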


Marine biologists spot rare blue whales off Massachusetts coast

Popular Science

The team observed the gentle giants two days in a row. Blue whales can be found in every ocean except the Arctic. As if soaring above the brilliant blue ocean isn't spectacular enough, the New England Aquarium's aerial survey team recently experienced two back-to-back sightings of blue whales, a little déjà blue, per the aquarium's clever social media post. The first sighting occurred on February 27, when scientists from the Aquarium's Anderson Cabot Center for Ocean Life spotted a blue whale.



New Insight into Hybrid Stochastic Gradient Descent: Beyond With-Replacement Sampling and Convexity

Neural Information Processing Systems

As an incremental-gradient algorithm, hybrid stochastic gradient descent (HSGD) enjoys the merits of both stochastic and full gradient methods for finite-sum minimization problems. However, the existing rate-of-convergence analysis for HSGD assumes with-replacement sampling (WRS) and is restricted to convex problems. It is not clear whether HSGD still carries these advantages under the common practice of without-replacement sampling (WoRS) and for non-convex problems. In this paper, we affirmatively answer this open question by showing that under WoRS, and for both convex and non-convex problems, HSGD (with a constant step-size) can still match full gradient descent in rate of convergence, while maintaining an incremental first-order oracle complexity that is independent of sample size and comparable to that of stochastic gradient descent. For a special class of finite-sum problems with linear prediction models, our convergence results can be further improved in some cases. Extensive numerical results confirm our theoretical findings and demonstrate the favorable efficiency of WoRS-based HSGD.
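For intuition, here is a generic sketch, not the paper's exact algorithm, of a hybrid scheme that interpolates between SGD and full gradient descent by growing the batch across epochs, using the without-replacement sampling (WoRS) the paper analyzes: each epoch visits a fresh permutation of the data, so no index repeats within an epoch. The least-squares problem, doubling schedule, and constant step-size are assumptions for illustration.

# Generic hybrid SGD sketch with without-replacement sampling (WoRS).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
y = A @ x_true + 0.01 * rng.normal(size=n)

def grad(idx, x):
    """Gradient of the least-squares loss over the components in idx."""
    return A[idx].T @ (A[idx] @ x - y[idx]) / len(idx)

x = np.zeros(d)
step = 0.1                          # constant step-size, as in the paper
batch = 8
for epoch in range(20):
    perm = rng.permutation(n)       # WoRS: no index repeats in an epoch
    for s in range(0, n, batch):
        x -= step * grad(perm[s:s + batch], x)
    batch = min(2 * batch, n)       # growing batch: SGD -> full gradient

print("parameter error:", np.linalg.norm(x - x_true))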