labyrinth




Quantifying Generalisation in Imitation Learning

Gavenski, Nathan, Rodrigues, Odinaldo

arXiv.org Artificial Intelligence

Imitation learning benchmarks often lack sufficient variation between training and evaluation, limiting meaningful generalisation assessment. We introduce Labyrinth, a benchmarking environment designed to test generalisation with precise control over structure, start and goal positions, and task complexity. It enables verifiably distinct training, evaluation, and test settings. Labyrinth provides a discrete, fully observable state space and known optimal actions, supporting interpretability and fine-grained evaluation. Its flexible setup allows targeted testing of generalisation factors and includes variants like partial observability, key-and-door tasks, and ice-floor hazards. By enabling controlled, reproducible experiments, Labyrinth advances the evaluation of generalisation in imitation learning and provides a valuable tool for developing more robust agents.
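The abstract describes a discrete, fully observable maze environment with controllable layout, start, and goal. As a toy illustration of that setup (a minimal sketch only; the class, action encoding, and reward are hypothetical and not the paper's actual Labyrinth API):

```python
import random

class ToyLabyrinth:
    """Minimal grid-maze sketch: discrete (row, col) states, full observability,
    and a layout/start/goal that can be varied to create verifiably distinct
    training and evaluation mazes. Illustrative only."""

    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

    def __init__(self, layout, start, goal, seed=0):
        self.grid = [list(row) for row in layout]   # '#' marks a wall
        self.start, self.goal = start, goal
        self.rng = random.Random(seed)
        self.pos = start

    def reset(self):
        self.pos = self.start
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        # Moves into walls or off the grid leave the agent in place.
        if 0 <= r < len(self.grid) and 0 <= c < len(self.grid[0]) and self.grid[r][c] != '#':
            self.pos = (r, c)
        done = self.pos == self.goal
        return self.pos, (1.0 if done else 0.0), done

env = ToyLabyrinth(["....", ".##.", "...."], start=(0, 0), goal=(2, 3))
state = env.reset()
for action in [3, 3, 3, 1, 1]:   # a hand-coded optimal action sequence
    state, reward, done = env.step(action)
print(state, done)  # (2, 3) True
```

Because the optimal action is known at every cell, an imitation-learned policy can be evaluated action-by-action rather than only by episode return.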



Steganography in Game Actions

Chang, Ching-Chun, Echizen, Isao

arXiv.org Artificial Intelligence

The problem of subliminal communication has been addressed in various forms of steganography, primarily relying on visual, auditory and linguistic media. However, the field faces a fundamental paradox: as the art of concealment advances, so too does the science of revelation, leading to an ongoing evolutionary interplay. This study seeks to extend the boundaries of what is considered a viable steganographic medium. We explore a steganographic paradigm, where hidden information is communicated through the episodes of multiple agents interacting with an environment. Each agent, acting as an encoder, learns a policy to disguise the very existence of hidden messages within actions seemingly directed toward innocent objectives. Meanwhile, an observer, serving as a decoder, learns to associate behavioural patterns with their respective agents despite their dynamic nature, thereby unveiling the hidden messages. The interactions of agents are governed by the framework of multi-agent reinforcement learning and shaped by feedback from the observer. This framework encapsulates a game-theoretic dilemma, wherein agents face decisions between cooperating to create distinguishable behavioural patterns or defecting to pursue individually optimal yet potentially overlapping episodic actions. As a proof of concept, we exemplify action steganography through the game of labyrinth, a navigation task where subliminal communication is concealed within the act of steering toward a destination. The stego-system has been systematically validated through experimental evaluations, assessing its distortion and capacity alongside its secrecy and robustness when subjected to simulated passive and active adversaries.
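The core idea, hiding information in the choice among equally plausible actions, can be shown with a much simpler toy than the paper's multi-agent RL system. On an open grid, every shortest path from the top-left to the bottom-right corner mixes "right" and "down" moves, so each step's choice is free to carry one message bit (this encoding is purely illustrative):

```python
def encode(bits):
    """Hide bits in a shortest path on an open grid: both 'R' (right) and 'D'
    (down) are optimal at every step, so the choice itself is the channel."""
    return ['R' if b else 'D' for b in bits]

def decode(actions):
    """The observer recovers the message by reading the action sequence."""
    return [1 if a == 'R' else 0 for a in actions]

msg = [1, 0, 1, 1, 0, 0]
episode = encode(msg)
print(episode)             # ['R', 'D', 'R', 'R', 'D', 'D']
print(decode(episode) == msg)  # True
```

The paper's setting is far harder: the encoding policies are learned, must remain indistinguishable from innocent behaviour to adversaries, and must stay decodable despite the agents' dynamic policies.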


Reinforcement Learning in Hyperbolic Spaces: Models and Experiments

Jaćimović, Vladimir, Kapić, Zinaid, Crnkić, Aladin

arXiv.org Artificial Intelligence

With the explosive growth of machine learning techniques and applications, new paradigms and models with transformative power are enriching the field. One of the most remarkable trends in recent years is the rapidly rising significance of Riemannian geometry and Lie group theory. The underlying cause is the growing complexity of data, which motivates more sophisticated approaches and has led to wide recognition that many data sets exhibit intrinsic curvature. In other words, many data sets are naturally represented in, or faithfully embedded into, non-Euclidean spaces. One clear example is rotational motion in robotics.
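A standard model of such a curved space is the Poincaré ball, where distances grow without bound as points approach the unit boundary. A small sketch of its geodesic distance (the formula is the standard one for the Poincaré ball model; its use here is illustrative, not taken from the paper):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincaré ball model of hyperbolic space:
    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2))).
    Points must lie strictly inside the unit ball."""
    sq = lambda x: sum(xi * xi for xi in x)
    diff = sq([a - b for a, b in zip(u, v)])
    denom = (1 - sq(u)) * (1 - sq(v))
    return math.acosh(1 + 2 * diff / denom)

# From the origin, d(0, r) reduces to 2 * artanh(r) = ln((1 + r) / (1 - r)):
print(poincare_distance((0.0, 0.0), (0.5, 0.0)))  # ln(3) ≈ 1.0986
```

The exponential growth of distance near the boundary is what makes hyperbolic spaces attractive for hierarchical, tree-like data.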


You Have Thirteen Hours in Which to Solve the Labyrinth: Enhancing AI Game Masters with Function Calling

Song, Jaewoo, Zhu, Andrew, Callison-Burch, Chris

arXiv.org Artificial Intelligence

Developing a consistent and reliable AI game master for text-based games is a challenging task due to the limitations of large language models (LLMs) and the complexity of the game master's role. This paper presents a novel approach to enhance AI game masters by leveraging function calling in the context of the table-top role-playing game "Jim Henson's Labyrinth: The Adventure Game." Our methodology involves integrating game-specific controls through functions, which we show improves the narrative quality and state update consistency of the AI game master. The experimental results, based on human evaluations and unit tests, demonstrate the effectiveness of our approach in enhancing gameplay experience and maintaining coherence with the game state. This work contributes to the advancement of game AI and interactive storytelling, offering insights into the design of more engaging and consistent AI-driven game masters.
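The function-calling pattern described here can be sketched as a dispatch layer: the LLM emits structured calls, and the game engine, not the model, applies them to authoritative state. The function names, state fields, and one-hour-per-scene rule below are hypothetical illustrations, not the paper's actual interface:

```python
# Authoritative game state lives outside the LLM; the model only proposes calls.
game_state = {"scene": "oubliette", "inventory": [], "clock_hours": 13}

def move_scene(state, destination):
    state["scene"] = destination
    state["clock_hours"] -= 1   # illustrative rule: each scene change costs an hour

def add_item(state, item):
    state["inventory"].append(item)

FUNCTIONS = {"move_scene": move_scene, "add_item": add_item}

def dispatch(call):
    """Apply one model-emitted call, e.g. {"name": "add_item", "args": {"item": "peach"}}."""
    FUNCTIONS[call["name"]](game_state, **call["args"])

dispatch({"name": "add_item", "args": {"item": "peach"}})
dispatch({"name": "move_scene", "args": {"destination": "hedge maze"}})
print(game_state)  # {'scene': 'hedge maze', 'inventory': ['peach'], 'clock_hours': 12}
```

Keeping state updates in deterministic functions is what lets the paper verify consistency with unit tests rather than relying on the LLM to track state in free text.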


Navigating the labyrinth: How generative models tackle complex data sampling

AIHub

The world of artificial intelligence (AI) has recently seen significant advancements in generative models, a type of machine-learning algorithm that "learns" patterns from sets of data in order to generate new, similar sets of data. Generative models are often used for tasks like drawing images and natural language generation – a famous example is the family of models used to develop ChatGPT. Generative models have had remarkable success in various applications, from image and video generation to music composition and language modeling. The problem is that we are lacking in theory when it comes to the capabilities and limitations of generative models; understandably, this gap can seriously affect how we develop and use them down the line. One of the main challenges has been the ability to effectively pick samples from complicated data patterns, especially given the limitations of traditional methods when dealing with the kind of high-dimensional and complex data commonly encountered in modern AI applications.


Emergent Braitenberg-style Behaviours for Navigating the ViZDoom `My Way Home' Labyrinth

Bayer, Caleidgh, Smith, Robert J., Heywood, Malcolm I.

arXiv.org Artificial Intelligence

The navigation of complex labyrinths with tens of rooms under visually partially observable state is typically addressed using recurrent deep reinforcement learning architectures. In this work, we show that navigation can be achieved through the emergent evolution of a simple Braitenberg-style heuristic that structures the interaction between agent and labyrinth, i.e. complex behaviour from simple heuristics. To do so, the approach of tangled program graphs is assumed, in which programs cooperatively coevolve to develop a modular indexing scheme that employs only 0.8% of the state space. We attribute this simplicity to several biases implicit in the representation, such as the use of pixel indexing as opposed to deploying a convolutional kernel or image processing operators.
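The flavour of such a heuristic, reading only a handful of pixels and steering toward the brighter side, can be sketched as follows. The frame size, sampled indices, and threshold are illustrative assumptions, not values from the paper:

```python
# Braitenberg-style steering sketch: sample a sparse set of pixels (echoing the
# paper's use of only ~0.8% of the state space) and turn toward the brighter side.
WIDTH, HEIGHT = 320, 240

def steer(frame, left_idx, right_idx, threshold=5):
    """frame: flat grayscale list of WIDTH * HEIGHT values in [0, 255]."""
    left = sum(frame[i] for i in left_idx) / len(left_idx)
    right = sum(frame[i] for i in right_idx) / len(right_idx)
    if abs(left - right) < threshold:
        return "forward"
    return "turn_left" if left > right else "turn_right"

frame = [0] * (WIDTH * HEIGHT)
for i in range(0, WIDTH * HEIGHT, WIDTH):  # brighten the leftmost column
    frame[i] = 255

print(steer(frame, left_idx=[0, WIDTH], right_idx=[WIDTH - 1, 2 * WIDTH - 1]))  # turn_left
```

The contrast with a convolutional policy is the point: no kernels or learned filters, just direct indexing of a few pixels wired to a simple rule.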


How an AI robot smashed human world record in Labyrinth, a classic marble maze game

FOX News

Researchers have developed an AI robot that can take on physical tasks. You've probably heard of AI winning against humans in games like chess and Go that require intellect. AI is good at crunching numbers and finding patterns. But dexterity and physical skill are things humans are supposed to be better at, right? Researchers at ETH Zurich have created an AI robot tasked with learning how to play the popular wooden labyrinth marble maze game.
