They're sweets, but not as you know them - why freeze-dried candy is trending
What are freeze-dried sweets and why are they popular? When Savannah Louise West first tasted freeze-dried gummies, she was intrigued. "I think the crunch is so satisfying, and I find it interesting to experience a candy I'm familiar with that has an entirely new texture," says the Toronto resident. Ms West is describing one of the main features of this spin-off candy, which independent and major confectionery manufacturers have been releasing onto shelves, both online and offline, for the past three years. It has been largely a US phenomenon, hence we'll use the US term "candy", but for our UK readers, we're talking about sweets here.
How WWII made Hershey and Mars Halloween candy kings
From sugar shortages to military contracts, World War II helped make M&Ms and Hershey's bars into symbols of American abundance. A 1940s Milky Way ad shows candy keeping pilots smiling through the war. Every year, Hershey manufactures 373 million of its signature milk chocolate bars. While the company doesn't release exact stats on Halloween sales, you can bet a lot of those end up in plastic Jack O'Lantern-shaped pails.
Anthropic's New Model Excels at Reasoning and Planning--and Has the Pokémon Skills to Prove It
Anthropic announced two new models, Claude 4 Opus and Claude Sonnet 4, during its first developer conference in San Francisco on Thursday. The pair will be immediately available to paying Claude subscribers. The new models, which jump the naming convention from 3.7 straight to 4, have a number of strengths, including their ability to reason, plan, and remember the context of conversations over extended periods of time, the company says. Claude 4 Opus is also even better at playing Pokémon than its predecessor. "It was able to work agentically on Pokémon for 24 hours," says Anthropic's chief product officer Mike Krieger in an interview with WIRED.
Claude isn't a great Pokémon player, and that's okay
If Claude Plays Pokémon is supposed to offer a glimpse of AI's future, it's not a very convincing showcase. For the past month and counting, Twitch has watched Anthropic's chatbot struggle to play Pokémon Red. Across multiple runs, Claude has failed to beat the nearly 30-year-old game. And yet for David Hershey, the project's lead developer, the showcase has been a success. "I wanted some place where I could understand how Claude handles situations where it needs to work over a very long period of time," Hershey explains to me over a video call.
Rethinking the Relationship between Recurrent and Non-Recurrent Neural Networks: A Study in Sparsity
Hershey, Quincy, Paffenroth, Randy, Pathak, Harsh, Tavener, Simon
Neural networks (NN) can be divided into two broad categories, recurrent and non-recurrent. Both types of neural networks are popular and extensively studied, but they are often treated as distinct families of machine learning algorithms. In this position paper, we argue that there is a closer relationship between these two types of neural networks than is normally appreciated. We show that many common neural network models, such as Recurrent Neural Networks (RNN), Multi-Layer Perceptrons (MLP), and even deep multi-layer transformers, can all be represented as iterative maps. The close relationship between RNNs and other types of NNs should not be surprising. In particular, RNNs are known to be Turing complete, and therefore capable of representing any computable function (including any other type of NN), but herein we argue that the relationship runs deeper and is more practical than this. For example, RNNs are often thought to be more difficult to train than other types of NNs, with RNNs being plagued by issues such as vanishing or exploding gradients. However, as we demonstrate in this paper, MLPs, RNNs, and many other NNs lie on a continuum, and this perspective leads to several insights that illuminate both theoretical and practical aspects of NNs.
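The abstract's central observation can be sketched in a few lines of numpy: a weight-tied MLP's forward pass is literally an iterated map, and the same iteration is exactly the update an RNN applies (here with no per-step input). Weight tying is an assumption made for brevity in this sketch; the paper's argument covers more general architectures.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(4, 4))
b = rng.normal(scale=0.1, size=4)

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, depth):
    h = x
    for _ in range(depth):       # "layers" of a weight-tied MLP
        h = relu(W @ h + b)
    return h

def rnn_forward(x, steps):
    h = x
    for _ in range(steps):       # recurrent time steps, zero input
        h = relu(W @ h + b)
    return h

x = rng.normal(size=4)
# The two forward passes compute the same function: depth-L MLP = L RNN steps.
assert np.allclose(mlp_forward(x, 3), rnn_forward(x, 3))
```

Under this view, "depth" and "time" are the same axis, which is one way to see why training pathologies like vanishing gradients are not unique to RNNs.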
Optimal Condition Training for Target Source Separation
Tzinis, Efthymios, Wichern, Gordon, Smaragdis, Paris, Roux, Jonathan Le
Recent research has shown remarkable performance in leveraging multiple extraneous conditional and non-mutually exclusive semantic concepts for sound source separation, allowing the flexibility to extract a given target source based on multiple different queries. In this work, we propose a new optimal condition training (OCT) method for single-channel target source separation, based on greedy parameter updates using the highest performing condition among equivalent conditions associated with a given target source. Our experiments show that the complementary information carried by the diverse semantic concepts significantly helps to disentangle and isolate sources of interest much more efficiently compared to single-conditioned models. Moreover, we propose a variation of OCT with condition refinement, in which an initial conditional vector is adapted to the given mixture and transformed to a more amenable representation for target source extraction. We showcase the effectiveness of OCT on diverse source separation experiments where it improves upon permutation invariant models with oracle assignment and obtains state-of-the-art performance in the more challenging task of text-based source separation, outperforming even dedicated text-only conditioned models.
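The greedy OCT update described above can be illustrated with a toy objective: among several "equivalent" conditions for one target, evaluate the loss under each and update parameters only through the best-performing one. The model below is a linear map with elementwise condition gating, a hypothetical stand-in chosen for brevity, not the paper's separation network.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(3, 3))
mixture = rng.normal(size=3)
target = rng.normal(size=3)
conditions = [rng.normal(size=3) for _ in range(4)]  # equivalent queries

def loss_and_grad(W, c):
    pred = (W @ mixture) * c             # condition gates the output
    err = pred - target
    loss = 0.5 * float(np.sum(err ** 2))
    grad = np.outer(err * c, mixture)    # dL/dW under this condition
    return loss, grad

init = min(loss_and_grad(W, c)[0] for c in conditions)
for _ in range(200):
    losses = [loss_and_grad(W, c)[0] for c in conditions]
    best = conditions[int(np.argmin(losses))]   # greedy condition choice
    _, g = loss_and_grad(W, best)
    W -= 0.01 * g                               # update only via the best
final = min(loss_and_grad(W, c)[0] for c in conditions)
```

Because each step descends on the currently best condition, the best-condition loss is monotonically driven down, which is the intuition behind the greedy selection rule.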
Heterogeneous Target Speech Separation
Tzinis, Efthymios, Wichern, Gordon, Subramanian, Aswin, Smaragdis, Paris, Roux, Jonathan Le
We introduce a new paradigm for single-channel target source separation where the sources of interest can be distinguished using non-mutually exclusive concepts (e.g., loudness, gender, language, spatial location, etc). Our proposed heterogeneous separation framework can seamlessly leverage datasets with large distribution shifts and learn cross-domain representations under a variety of concepts used as conditioning. Our experiments show that training separation models with heterogeneous conditions facilitates the generalization to new concepts with unseen out-of-domain data while also performing substantially higher than single-domain specialist models. Notably, such training leads to more robust learning of new harder source separation discriminative concepts and can yield improvements over permutation invariant training with oracle source selection. We analyze the intrinsic behavior of source separation training with heterogeneous metadata and propose ways to alleviate emerging problems with challenging separation conditions. We release the collection of preparation recipes for all datasets used to further promote research towards this challenging task.
AutoClip: Adaptive Gradient Clipping for Source Separation Networks
Seetharaman, Prem, Wichern, Gordon, Pardo, Bryan, Roux, Jonathan Le
Clipping the gradient is a known approach to improving gradient descent, but requires hand selection of a clipping threshold hyperparameter. We present AutoClip, a simple method for automatically and adaptively choosing a gradient clipping threshold, based on the history of gradient norms observed during training. Experimental results show that applying AutoClip results in improved generalization performance for audio source separation networks. Observation of the training dynamics of a separation network trained with and without AutoClip show that AutoClip guides optimization into smoother parts of the loss landscape. AutoClip is very simple to implement and can be integrated readily into a variety of applications across multiple domains.
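The rule the abstract describes is compact enough to sketch directly: clip each step's gradient to the p-th percentile of all gradient norms observed so far in training, so no fixed threshold has to be hand-tuned. The paper applies this to source separation networks; the rule itself is model-agnostic, and the demo below uses a synthetic gradient history.

```python
import numpy as np

def autoclip(grad, norm_history, p=10.0):
    norm = float(np.linalg.norm(grad))
    norm_history.append(norm)
    threshold = float(np.percentile(norm_history, p))  # adaptive threshold
    if norm > threshold:
        grad = grad * (threshold / norm)   # rescale, keep direction
    return grad

# After nine well-behaved steps, an outlier gradient is pulled back to
# roughly the scale of the history rather than to a hand-picked constant.
history = [1.0] * 9
g = autoclip(np.array([100.0, 0.0]), history, p=10.0)
```

A low percentile such as p=10 clips aggressively; the paper's point is that even a crude history-based threshold removes the need to search over clipping values per model and dataset.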
OtoWorld: Towards Learning to Separate by Learning to Move
Ranadive, Omkar, Gasser, Grant, Terpay, David, Seetharaman, Prem
We present OtoWorld, an interactive environment in which agents must learn to listen in order to solve navigational tasks. The purpose of OtoWorld is to facilitate reinforcement learning research in computer audition, where agents must learn to listen to the world around them to navigate. OtoWorld is built on three open source libraries: OpenAI Gym for environment and agent interaction, PyRoomAcoustics for ray-tracing and acoustics simulation, and nussl for training deep computer audition models. OtoWorld is the audio analogue of GridWorld, a simple navigation game. OtoWorld can be easily extended to more complex environments and games. To solve one episode of OtoWorld, an agent must move towards each sounding source in the auditory scene and "turn it off". The agent receives no other input than the current sound of the room. The sources are placed randomly within the room and can vary in number. The agent receives a reward for turning off a source. We present preliminary results on the ability of agents to win at OtoWorld. OtoWorld is open-source and available.
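The episode structure described above can be mimicked in a toy 1-D environment; to be clear, this is not OtoWorld's actual API, just a minimal analogue of its rules. Sources sit at random cells, the agent hears only a distance-attenuated loudness, moves left or right, and is rewarded for reaching a source and "turning it off"; an episode ends when the scene is silent.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyOtoWorld:
    def __init__(self, size=10, n_sources=2):
        self.size = size
        self.agent = 0
        picks = rng.choice(np.arange(1, size), size=n_sources, replace=False)
        self.sources = set(int(p) for p in picks)

    def observe(self):
        # Loudness decays with distance to the nearest active source.
        if not self.sources:
            return 0.0
        return 1.0 / (1 + min(abs(self.agent - s) for s in self.sources))

    def step(self, action):                   # action: -1 (left) or +1 (right)
        self.agent = int(np.clip(self.agent + action, 0, self.size - 1))
        if self.agent in self.sources:
            self.sources.remove(self.agent)   # "turn it off"
            return 1.0, not self.sources      # reward, episode done?
        return 0.0, False

# A naive rightward sweep visits every cell and so always finishes an
# episode; a learned policy would instead follow the loudness signal.
env = ToyOtoWorld()
total, done = 0.0, False
while not done:
    r, done = env.step(+1)
    total += r
```

The interesting part of the real benchmark is that the agent sees only `observe()`-style audio, so winning requires learning to localize sources from sound alone.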
How AI and IoT Are Transforming the Future of the Corporate World
Tight deadlines, fierce competition, and demanding customers are putting increasing pressure on organizations to improve the quality of their output and the speed at which they deliver it. Emerging technologies such as the internet of things (IoT), artificial intelligence (AI), augmented reality (AR) and virtual reality (VR), big data, and blockchain have given organizations the opportunity to disrupt virtually every business process. IoT and AI solutions are both unique, and each carries the potential to digitally transform an enterprise. In fact, it is projected that companies could invest up to $15 trillion in IoT by 2025, and some believe that the Internet of Things offers a potential economic impact of $4 trillion to $11 trillion per year by 2025.