
OBSER: Object-Based Sub-Environment Recognition for Zero-Shot Environmental Inference

Choi, Won-Seok, Han, Dong-Sig, Choi, Suhyung, Yang, Hyeonseo, Zhang, Byoung-Tak

arXiv.org Machine Learning

We present the Object-Based Sub-Environment Recognition (OBSER) framework, a novel Bayesian framework that infers three fundamental relationships between sub-environments and their constituent objects. In the OBSER framework, metric and self-supervised learning models estimate the object distributions of sub-environments on the latent space to compute these measures. Both theoretically and empirically, we validate the proposed framework by introducing the (ϵ, δ) statistically separable (EDS) function, which indicates the alignment of the representation. Our framework reliably performs inference in open-world and photorealistic environments and outperforms scene-based methods in chained retrieval tasks. The OBSER framework enables zero-shot recognition of environments to achieve autonomous environment understanding.


Reinforcement Learning-Enhanced Procedural Generation for Dynamic Narrative-Driven AR Experiences

Joshi, Aniruddha Srinivas

arXiv.org Artificial Intelligence

Procedural Content Generation (PCG) is widely used to create scalable and diverse environments in games. However, existing methods, such as the Wave Function Collapse (WFC) algorithm, are often limited to static scenarios and lack the adaptability required for dynamic, narrative-driven applications, particularly in augmented reality (AR) games. This paper presents a reinforcement learning-enhanced WFC framework designed for mobile AR environments. By integrating environment-specific rules and dynamic tile weight adjustments informed by reinforcement learning (RL), the proposed method generates maps that are both contextually coherent and responsive to gameplay needs. Comparative evaluations and user studies demonstrate that the framework achieves superior map quality and delivers immersive experiences, making it well-suited for narrative-driven AR games. Additionally, the method holds promise for broader applications in education, simulation training, and immersive extended reality (XR) experiences, where dynamic and adaptive environments are critical.
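The core loop the abstract describes can be illustrated with a minimal Wave Function Collapse sketch. The tiles, adjacency rules, and weights below are invented for illustration, not the paper's implementation; the point is that the tile weights are an external input, which is the knob a reinforcement learning policy could adjust between generations.

```python
import random

# Hypothetical tile set and adjacency rules; "sand" acts as a transition tile.
TILES = {"grass", "water", "sand"}
ADJACENT = {  # which tiles may sit next to each other
    "grass": {"grass", "sand"},
    "sand": {"grass", "sand", "water"},
    "water": {"water", "sand"},
}

def collapse(width, height, weights, seed=0):
    """Generate a map: repeatedly collapse the lowest-entropy cell,
    sampling tiles in proportion to the given (adjustable) weights."""
    rng = random.Random(seed)
    grid = [[set(TILES) for _ in range(width)] for _ in range(height)]

    def neighbors(y, x):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < height and 0 <= nx < width:
                yield ny, nx

    def propagate(y, x):
        # Restore arc consistency after a cell's candidate set shrinks.
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            allowed = set().union(*(ADJACENT[t] for t in grid[cy][cx]))
            for ny, nx in neighbors(cy, cx):
                narrowed = grid[ny][nx] & allowed
                if narrowed != grid[ny][nx]:
                    grid[ny][nx] = narrowed
                    stack.append((ny, nx))

    while any(len(c) > 1 for row in grid for c in row):
        # Pick the undecided cell with the fewest candidates (lowest entropy).
        y, x = min(((y, x) for y in range(height) for x in range(width)
                    if len(grid[y][x]) > 1),
                   key=lambda p: len(grid[p[0]][p[1]]))
        cands = sorted(grid[y][x])
        tile = rng.choices(cands, weights=[weights[t] for t in cands])[0]
        grid[y][x] = {tile}
        propagate(y, x)
    return [[next(iter(c)) for c in row] for row in grid]

# Boosting one tile's weight (as an RL policy might, in response to gameplay
# signals) biases the generated map toward that terrain.
watery = collapse(6, 6, {"grass": 1, "sand": 1, "water": 10}, seed=1)
```

Because "sand" is compatible with every tile, propagation can never empty a cell's candidate set here; a richer tile set would need contradiction handling and backtracking.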


Few-shot Semantic Learning for Robust Multi-Biome 3D Semantic Mapping in Off-Road Environments

Atha, Deegan, Lei, Xianmei, Khattak, Shehryar, Sabel, Anna, Miller, Elle, Noca, Aurelio, Lim, Grace, Edlund, Jeffrey, Padgett, Curtis, Spieler, Patrick

arXiv.org Artificial Intelligence

Off-road environments pose significant perception challenges for high-speed autonomous navigation due to unstructured terrain, degraded sensing conditions, and domain shifts among biomes. Learning semantic information across these conditions and biomes can be challenging when a large amount of ground truth data is required. In this work, we propose an approach that leverages a pre-trained Vision Transformer (ViT) with fine-tuning on a small (<500 images), sparse and coarsely labeled (<30% pixels) multi-biome dataset to predict 2D semantic segmentation classes. These classes are fused over time via a novel range-based metric and aggregated into a 3D semantic voxel map. We demonstrate zero-shot out-of-biome 2D semantic segmentation on the Yamaha (52.9 mIoU) and Rellis (55.5 mIoU) datasets, along with few-shot coarse sparse labeling with existing data for improved segmentation performance on Yamaha (66.6 mIoU) and Rellis (67.2 mIoU). We further illustrate the feasibility of using a voxel map with a range-based semantic fusion approach to handle common off-road hazards like pop-up hazards, overhangs, and water features.
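The range-based fusion idea can be sketched as follows. This is a hypothetical illustration, not the paper's metric: each segmentation observation votes for a class in the voxel it projects to, and the vote is down-weighted as sensor range grows, on the assumption that distant pixels are less reliable.

```python
import math
from collections import defaultdict

def range_weight(range_m, sigma=10.0):
    """Confidence decays smoothly with distance (assumed model, not the paper's)."""
    return math.exp(-range_m / sigma)

class SemanticVoxelMap:
    def __init__(self):
        # voxel index -> accumulated score per semantic class
        self.scores = defaultdict(lambda: defaultdict(float))

    def fuse(self, voxel, class_probs, range_m):
        """Accumulate one frame's class probabilities, weighted by range."""
        w = range_weight(range_m)
        for cls, p in class_probs.items():
            self.scores[voxel][cls] += w * p

    def label(self, voxel):
        """Return the highest-scoring class, or None if unobserved."""
        cls_scores = self.scores[voxel]
        return max(cls_scores, key=cls_scores.get) if cls_scores else None

vmap = SemanticVoxelMap()
# A near observation confidently says "trail"; a far one weakly says "water".
# Range weighting lets the close, reliable view dominate.
vmap.fuse((3, 1, 0), {"trail": 0.9, "water": 0.1}, range_m=4.0)
vmap.fuse((3, 1, 0), {"trail": 0.2, "water": 0.8}, range_m=40.0)
```

After both observations, the voxel's label is "trail": the 40 m observation contributes roughly exp(-4) ≈ 0.018 of a full vote versus exp(-0.4) ≈ 0.67 for the 4 m one.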


SatCLIP: Global, General-Purpose Location Embeddings with Satellite Imagery

Klemmer, Konstantin, Rolf, Esther, Robinson, Caleb, Mackey, Lester, Rußwurm, Marc

arXiv.org Artificial Intelligence

Geographic location is essential for modeling tasks in fields ranging from ecology to epidemiology to the Earth system sciences. However, extracting relevant and meaningful characteristics of a location can be challenging, often entailing expensive data fusion or data distillation from global imagery datasets. To address this challenge, we introduce Satellite Contrastive Location-Image Pretraining (SatCLIP), a global, general-purpose geographic location encoder that learns an implicit representation of locations from openly available satellite imagery. Trained location encoders provide vector embeddings summarizing the characteristics of any given location for convenient usage in diverse downstream tasks. We show that SatCLIP embeddings, pretrained on globally sampled multi-spectral Sentinel-2 satellite data, can be used in various predictive tasks that depend on location information but not necessarily satellite imagery, including temperature prediction, animal recognition in imagery, and population density estimation. Across tasks, SatCLIP embeddings consistently outperform embeddings from existing pretrained location encoders, ranging from models trained on natural images to models trained on semantic context. SatCLIP embeddings also help to improve geographic generalization. This demonstrates the potential of general-purpose location encoders and opens the door to learning meaningful representations of our planet from the vast, varied, and largely untapped modalities of geospatial data.
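The "Contrastive Location-Image Pretraining" in the name builds on the CLIP-style symmetric contrastive objective, which can be sketched in NumPy. The shapes and data below are illustrative, not the released model: embeddings of a location and a satellite image of the same place are pulled together, while mismatched pairs within the batch are pushed apart.

```python
import numpy as np

def clip_loss(loc_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE loss between paired location and image embeddings."""
    # L2-normalize so the dot product is cosine similarity.
    loc = loc_emb / np.linalg.norm(loc_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = loc @ img.T / temperature           # (batch, batch) similarities
    labels = np.arange(len(logits))              # matching pairs on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the location-to-image and image-to-location directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
# Perfectly aligned pairs score a much lower loss than unrelated ones.
aligned = clip_loss(emb, emb)
random_pairs = clip_loss(emb, rng.normal(size=(8, 16)))
```

In SatCLIP the two towers would be a location encoder (fed coordinates) and an image encoder (fed Sentinel-2 patches); after training, only the location tower is needed for downstream tasks.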


Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction

Cai, Shaofei, Wang, Zihao, Ma, Xiaojian, Liu, Anji, Liang, Yitao

arXiv.org Artificial Intelligence

We study the problem of learning goal-conditioned policies in Minecraft, a popular, widely accessible yet challenging open-ended environment for developing human-level multi-task agents. We first identify two main challenges of learning such policies: 1) the indistinguishability of tasks from the state distribution, due to the vast scene diversity, and 2) the non-stationary nature of environment dynamics caused by partial observability. To tackle the first challenge, we propose Goal-Sensitive Backbone (GSB) for the policy to encourage the emergence of goal-relevant visual state representations. To tackle the second challenge, the policy is further fueled by an adaptive horizon prediction module that helps alleviate the learning uncertainty brought by the non-stationary dynamics. Experiments on 20 Minecraft tasks show that our method significantly outperforms the best baseline so far; in many of them, we double the performance. Our ablation and exploratory studies then explain how our approach beats its counterparts and also unveil the surprising bonus of zero-shot generalization to new scenes (biomes). We hope our agent can help shed some light on learning goal-conditioned, multi-task agents in challenging, open-ended environments like Minecraft.


Minecraft's big wilderness update arrives June 7th

Engadget

It took several months, but Minecraft's The Wild Update is nearly here. Mojang and Microsoft are releasing The Wild across all platforms on June 7th, and it remains as expansive as promised. The refresh adds two biomes: a mangrove swamp, and a "deep dark" that hides vicious mobs (such as the Shrieker and Warden) as well as special resources. You can also sail a boat with a chest, so you won't need to leave supplies behind if you're crossing a lake. The upgrade also adds a mud block (made with dirt and water, naturally), a crowd-voted item collector mob (the allay) and a frog that grows from tadpoles.


The future of 'Minecraft' includes swamps, scary monsters and a Game Pass bundle

Engadget

On Saturday, Mojang held its annual Minecraft Live fan convention. As in years past, the event saw the studio detail the future of its immensely popular sandbox game. And if you're a fan of Minecraft, the livestream did not disappoint. The studio kicked off the event with the announcement of The Wild Update. Set to come out sometime in 2022, Mojang promises this latest DLC will change how players explore and interact with the game's overworld.


Novartis Canada Inaugurates a Biome Innovation Centre in Montreal

#artificialintelligence

The Canadian Biome Digital Innovation Hub, located in the heart of Montreal's dynamic and world-renowned artificial intelligence (AI) community, joins the global network introduced by Novartis in October 2018 with the launch of the first Biome in the digital heartland of Silicon Valley. The network has since expanded to include hubs in the United Kingdom, France, India and now Canada. This announcement follows the strategic alliance created last year between Novartis and Mila, the Montreal AI research institute founded in 1993 by Professor Yoshua Bengio. The Biome will be headquartered at Mila, where start-ups and entrepreneurs who join as partners will have access to the Novartis Canada team, so that ideas can be converted quickly into solutions for patients. Recognizing that the momentum for innovative digital solutions is here and now, Novartis will give partners access to its growing network of Digital Innovation Hubs in global centres, as well as resources to scale up ideas as quickly as possible to help patients and healthcare providers in Canada and around the world.