As AI floods our culture, here's why we must protect human storytelling in games
Buying the Zombies, Run! studio wasn't part of my plan, but a post-apocalypse game with stories that make people feel seen pulled me in. A few days ago, I clicked a button on my phone to send funds to a company in Singapore and so took ownership of the video game I co-created and am lead writer for: Zombies, Run! I am a novelist: I wrote the bestselling, award-winning The Power, which was turned into an Amazon Prime TV series starring Toni Collette. What on earth am I doing buying a games company?
This Quest 3S Bundle Is $50 Off and Includes a Game and Gift Card
Pick up a new Meta headset, and pocket a $50 gift card in the process. If you've been dreaming of getting into virtual reality but have been holding out for a good deal, this may be your moment. I spotted a Meta Quest 3S bundle at Best Buy that not only knocks $50 off the normal price but also includes a game and a $50 Best Buy digital gift card. That's quite the deal on a product that doesn't often see major discounts, and you can use that gift card to accessorize your new headset. Meta's lineup of stand-alone headsets has slowly improved over the last few years, with frequent updates adding functionality and growing the library of games.
One Life to Learn: Inferring Symbolic World Models for Stochastic Environments from Unguided Exploration
Khan, Zaid, Prasad, Archiki, Stengel-Eskin, Elias, Cho, Jaemin, Bansal, Mohit
Symbolic world modeling requires inferring and representing an environment's transitional dynamics as an executable program. Prior work has focused on largely deterministic environments with abundant interaction data, simple mechanics, and human guidance. We address a more realistic and challenging setting, learning in a complex, stochastic environment where the agent has only "one life" to explore a hostile environment without human guidance. We introduce OneLife, a framework that models world dynamics through conditionally-activated programmatic laws within a probabilistic programming framework. Each law operates through a precondition-effect structure, activating in relevant world states. This creates a dynamic computation graph that routes inference and optimization only through relevant laws, avoiding scaling challenges when all laws contribute to predictions about a complex, hierarchical state, and enabling the learning of stochastic dynamics even with sparse rule activation. To evaluate our approach under these demanding constraints, we introduce a new evaluation protocol that measures (a) state ranking, the ability to distinguish plausible future states from implausible ones, and (b) state fidelity, the ability to generate future states that closely resemble reality. We develop and evaluate our framework on Crafter-OO, our reimplementation of the Crafter environment that exposes a structured, object-oriented symbolic state and a pure transition function that operates on that state alone. OneLife can successfully learn key environment dynamics from minimal, unguided interaction, outperforming a strong baseline on 16 out of 23 scenarios tested. We also test OneLife's planning ability, with simulated rollouts successfully identifying superior strategies. Our work establishes a foundation for autonomously constructing programmatic world models of unknown, complex environments.
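The precondition-effect structure described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's actual API: class names, the activation probability, and the Crafter-like state are all assumptions for the sake of the example.

```python
import random

# Hypothetical sketch of a conditionally-activated "law" in the spirit of
# OneLife: each law pairs a precondition over the symbolic state with a
# stochastic effect that fires only when the precondition holds.
class Law:
    def __init__(self, name, precondition, effect, prob=1.0):
        self.name = name
        self.precondition = precondition  # state -> bool
        self.effect = effect              # mutates state in place
        self.prob = prob                  # learnable activation probability

    def maybe_apply(self, state, rng):
        if self.precondition(state) and rng.random() < self.prob:
            self.effect(state)
            return True  # law contributed to this transition
        return False

def step(state, laws, rng):
    # Only laws whose preconditions match the current state contribute,
    # forming a dynamic computation graph over the active subset.
    fired = [law.name for law in laws if law.maybe_apply(state, rng)]
    return state, fired

# Toy Crafter-like dynamic: the agent loses health when a zombie is adjacent.
zombie_attack = Law(
    "zombie_attack",
    precondition=lambda s: s["zombie_adjacent"],
    effect=lambda s: s.update(health=s["health"] - 1),
    prob=0.9,
)

rng = random.Random(0)
state = {"health": 9, "zombie_adjacent": True}
state, fired = step(state, [zombie_attack], rng)
```

Because inactive laws are skipped entirely, inference cost scales with the number of laws whose preconditions match, not with the total number of laws.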
SocialEval: Evaluating Social Intelligence of Large Language Models
Zhou, Jinfeng, Chen, Yuxuan, Shi, Yihan, Zhang, Xuanming, Lei, Leqi, Feng, Yi, Xiong, Zexuan, Yan, Miao, Wang, Xunzhi, Cao, Yaru, Yin, Jianing, Wang, Shuai, Dai, Quanyu, Dong, Zhenhua, Wang, Hongning, Huang, Minlie
LLMs exhibit promising Social Intelligence (SI) in modeling human behavior, raising the need to evaluate LLMs' SI and their discrepancy with humans. SI equips humans with interpersonal abilities to behave wisely in navigating social interactions to achieve social goals. This presents an operational evaluation paradigm: outcome-oriented goal achievement evaluation and process-oriented interpersonal ability evaluation, which existing work fails to address. To this end, we propose SocialEval, a script-based bilingual SI benchmark, integrating outcome- and process-oriented evaluation by manually crafting narrative scripts. Each script is structured as a world tree that contains plot lines driven by interpersonal ability, providing a comprehensive view of how LLMs navigate social interactions. Experiments show that LLMs fall behind humans on both SI evaluations, exhibit prosociality, and prefer more positive social behaviors, even if they lead to goal failure. Analysis of LLMs' formed representation space and neuronal activations reveals that LLMs have developed ability-specific functional partitions akin to the human brain.
Playing with words: why novelists are becoming video game writers – and vice-versa
I've been working in games for a little more than 15 years, and the main thing I'd say about it at this point is that it's a pretty annoying job to explain at parties. People often say something like, "Oh, I don't really play games," which is surely an odd thing to tell a game designer moments after you've been introduced; I don't really eat croissants, but that's not the first thing I bring up if I meet a patissier. So one of the joys of publishing my first novel last year was the option to sidestep all of that, and say: "Oh, I'm a writer." I wrote a novel; I'm working on another one; job done, the conversation can move on. Nobody says, "Oh, I don't really read books," even though that's at least as likely to be true.
Scientists reveal what zombies would REALLY look like - and say the possessed humans in the Last of Us Season 2 aren't far off
With the second season of The Last of Us returning to our screens, it might be comforting to think that the show is purely fictional. But believe it or not, the show's haunting zombies aren't that far from reality. Real-life 'zombie-making' fungi burrow into their host's flesh and manipulate their minds to turn them into hyperactive super spreaders. As it stands, these mind-warping parasites only affect certain insects. However, the stages of infection are eerily similar to those seen in the hit HBO show.
Thinking agents for zero-shot generalization to qualitatively novel tasks
Miconi, Thomas, McKee, Kevin, Zheng, Yicong, McCaleb, Jed
The Obelisk Team, Astera Institute, Emeryville, USA. Correspondence: Thomas Miconi, thomas.miconi@gmail.com
Abstract: Intelligent organisms can solve truly novel problems which they have never encountered before, either in their lifetime or their evolution. An important component of this capacity is the ability to "think", that is, to mentally manipulate objects, concepts and behaviors in order to plan and evaluate possible solutions to novel problems, even without environment interaction. To generate problems that are truly qualitatively novel, while still solvable zero-shot (by mental simulation), we use the combinatorial nature of environments: we train the agent while withholding a specific combination of the environment's elements. The novel test task, based on this combination, is thus guaranteed to be truly novel, while still mentally simulable, since the agent has been exposed to each individual element (and their pairwise interactions) during training. We propose a method to train agents endowed with world models to make use of their mental simulation abilities, by selecting tasks based on the difference between the agent's pre-thinking and post-thinking performance. When tested on the novel, withheld problem, the resulting agent successfully simulated alternative scenarios and used the resulting information to guide its behavior in the actual environment, solving the novel task in a single real-environment trial (zero-shot).
1 Introduction
An important aspect of intelligence is the ability to handle novel problems. Simpler organisms are restricted to problems similar to those they have been exposed to during training, and fare badly when faced with truly novel ones. A major component of this capacity is the ability to think before acting. By "thinking", that is, by internally manipulating concepts and behaviors and evaluating likely outcomes, agents can tackle novel problems never encountered before, by recombining existing knowledge into new solutions. This ability is perhaps the hallmark of what we think of as truly "intelligent" behavior: it is highly prevalent in humans, but it is debated whether it even exists in non-human animals [Suddendorf and Busby, 2003], including mammals such as rodents [Gillespie et al., 2021] or even great apes [Suddendorf et al., 2009, Osvath, 2010]. Much work in machine learning has focused on training agents with increasingly complex innate behaviors.
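The task-selection rule in the abstract, prioritising tasks where thinking helps most, can be sketched as follows. All names are illustrative assumptions, not the authors' code: the idea is simply to rank tasks by the gap between post-thinking and pre-thinking performance.

```python
# Illustrative curriculum sketch: pick the k tasks where mental simulation
# ("thinking") improves performance the most, i.e. where the post-thinking
# success estimate most exceeds the pre-thinking one.
def select_tasks(tasks, eval_pre, eval_post, k):
    # eval_pre / eval_post map a task to a scalar success estimate.
    gains = {t: eval_post(t) - eval_pre(t) for t in tasks}
    return sorted(tasks, key=lambda t: gains[t], reverse=True)[:k]

# Toy numbers: task "c" benefits most from thinking, then "a".
pre = {"a": 0.2, "b": 0.5, "c": 0.1}
post = {"a": 0.6, "b": 0.55, "c": 0.9}
chosen = select_tasks(["a", "b", "c"], pre.get, post.get, k=2)
```

Training on the selected tasks then rewards the agent specifically for exploiting its world model, rather than for behaviors it could already perform reactively.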
'We don't go to Ravenholm': the story behind Half-Life 2's most iconic level
At the start of Valve's Half-Life 2, the seminal first-person shooter game that turns 20 this month, taciturn scientist Gordon Freeman is trapped within a dystopian cityscape. Armed soldiers patrol the streets, and innocent citizens wander around in a daze, bereft of purpose and future. Dr Wallace Breen, Freeman's former boss at the scientific "research centre" Black Mesa, looks down from giant video screens, espousing the virtues of humankind's benefactors, an alien race known as The Combine. As Freeman stumbles through these first few levels of Half-Life 2, the player acclimatises to the horrible future laid out before them. It's hardly the most cheerful setting, but there are some friendly faces (security guard Barney, Alyx and Eli Vance) and even moments of humour, as Dr Isaac Kleiner's pet, a debeaked face-eating alien called Lamarr, runs amok in his laboratory.
Preference Optimization with Multi-Sample Comparisons
Wang, Chaoqi, Zhao, Zhuokai, Zhu, Chen, Sankararaman, Karthik Abinav, Valko, Michal, Cao, Xuefei, Chen, Zhaorun, Khabsa, Madian, Chen, Yuxin, Ma, Hao, Wang, Sinong
Recent advancements in generative models, particularly large language models (LLMs) and diffusion models, have been driven by extensive pretraining on large datasets followed by post-training. However, current post-training methods such as reinforcement learning from human feedback (RLHF) and direct alignment from preference methods (DAP) primarily utilize single-sample comparisons. These approaches often fail to capture critical characteristics such as generative diversity and bias, which are more accurately assessed through multiple samples. To address these limitations, we introduce a novel approach that extends post-training to include multi-sample comparisons. To achieve this, we propose Multi-sample Direct Preference Optimization (mDPO) and Multi-sample Identity Preference Optimization (mIPO). These methods improve traditional DAP methods by focusing on group-wise characteristics. Empirically, we demonstrate that multi-sample comparison is more effective in optimizing collective characteristics (e.g., diversity and bias) for generative models than single-sample comparison. Additionally, our findings suggest that multi-sample comparisons provide a more robust optimization framework, particularly for datasets with label noise.
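The group-wise idea behind mDPO can be illustrated with a minimal sketch. This is a hedged approximation, not the paper's exact loss: it replaces the single-sample log-probability terms of standard DPO with group averages, and the function name, argument shapes, and numbers are all assumptions.

```python
import math

# Sketch of a multi-sample DPO-style loss: compare the average (policy minus
# reference) log-likelihood of a group of preferred samples against that of
# a group of dispreferred samples, then apply the usual -log sigmoid.
def mdpo_loss(logp_win, logp_lose, logp_win_ref, logp_lose_ref, beta=0.1):
    # Each argument is a list of per-sample log-probabilities; the group
    # average stands in for the single-sample term in standard DPO.
    def avg(xs):
        return sum(xs) / len(xs)
    margin = beta * ((avg(logp_win) - avg(logp_win_ref))
                     - (avg(logp_lose) - avg(logp_lose_ref)))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

loss = mdpo_loss(
    logp_win=[-1.0, -1.2], logp_lose=[-2.0, -2.4],
    logp_win_ref=[-1.5, -1.5], logp_lose_ref=[-1.8, -1.8],
)
```

Averaging over the group is what lets the objective respond to collective properties such as diversity, which a single preferred/dispreferred pair cannot express.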
Reinforcement Learning for High-Level Strategic Control in Tower Defense Games
Bergdahl, Joakim, Sestini, Alessandro, Gisslén, Linus
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players. Many mobile titles feature quick gameplay loops that allow players to progress steadily, requiring an abundance of levels and puzzles to prevent them from reaching the end too quickly. As with any content creation, testing and validation are essential to ensure engaging gameplay mechanics, enjoyable game assets, and playable levels. In this paper, we propose an automated approach that can be leveraged for gameplay testing and validation that combines traditional scripted methods with reinforcement learning, reaping the benefits of both approaches while adapting to new situations similarly to how a human player would. We test our solution on a popular tower defense game, Plants vs. Zombies. The results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using only heuristic AI, achieving a 57.12% success rate compared to 47.95% in a set of 40 levels. Moreover, the results demonstrate the difficulty of training a general agent for this type of puzzle-like game.
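The hybrid scripted-plus-learned design described above can be sketched as a simple fallback rule. This is a minimal illustration under assumed names (the scripted heuristic, the confidence threshold, and the toy state are not from the paper): a learned policy acts when confident, and a scripted heuristic takes over otherwise.

```python
# Scripted baseline: target the row currently containing the most zombies,
# a simple Plants vs. Zombies-style heuristic (illustrative only).
def scripted_policy(state):
    rows = state["zombies_per_row"]
    return max(rows, key=rows.get)

# Hybrid agent: trust the RL policy when its confidence clears a threshold,
# otherwise fall back to the scripted heuristic.
def hybrid_action(state, learned_policy, confidence_threshold=0.7):
    action, confidence = learned_policy(state)
    if confidence >= confidence_threshold:
        return action
    return scripted_policy(state)

# Toy learned policy that is unsure (confidence 0.4), so the script decides.
state = {"zombies_per_row": {0: 1, 1: 3, 2: 0}}
action = hybrid_action(state, lambda s: (2, 0.4))
```

The fallback gives the agent the robustness of the script on familiar levels while letting the learned component adapt to situations the script's author never anticipated.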