ranger
- Europe > Switzerland > Zürich > Zürich (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- North America > Canada (0.04)
- (2 more...)
- Transportation > Infrastructure & Services (0.68)
- Leisure & Entertainment > Games (0.47)
Choosing Well Your Opponents: How to Guide the Synthesis of Programmatic Strategies
Moraes, Rubens O., Aleixo, David S., Ferreira, Lucas N., Lelis, Levi H. S.
This paper introduces Local Learner (2L), an algorithm for providing a set of reference strategies to guide the search for programmatic strategies in two-player zero-sum games. Previous learning algorithms, such as Iterated Best Response (IBR), Fictitious Play (FP), and Double-Oracle (DO), can be computationally expensive or miss important information for guiding search algorithms. 2L actively selects a set of reference strategies to improve the search signal. We empirically demonstrate the advantages of our approach while guiding a local search algorithm for synthesizing strategies in three games, including MicroRTS, a challenging real-time strategy game. Results show that 2L learns reference strategies that provide a stronger search signal than IBR, FP, and DO. We also simulate a tournament of MicroRTS, where a synthesizer using 2L outperformed the winners of the two latest MicroRTS competitions, which were programmatic strategies written by human programmers.
- North America > Canada > Alberta (0.14)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- South America > Brazil (0.04)
Ranger: A Toolkit for Effect-Size Based Multi-Task Evaluation
Sertkan, Mete, Althammer, Sophia, Hofstätter, Sebastian
In this paper, we introduce Ranger - a toolkit to facilitate the easy use of effect-size-based meta-analysis for multi-task evaluation in NLP and IR. We observed that our communities often face the challenge of aggregating results over incomparable metrics and scenarios, which makes conclusions and take-away messages less reliable. With Ranger, we aim to address this issue by providing a task-agnostic toolkit that combines the effect of a treatment on multiple tasks into one statistical evaluation, allowing for comparison of metrics and computation of an overall summary effect. Our toolkit produces publication-ready forest plots that enable clear communication of evaluation results over multiple tasks. Our goal with the ready-to-use Ranger toolkit is to promote robust, effect-size-based evaluation and improve evaluation standards in the community. We provide two case studies for common IR and NLP settings to highlight Ranger's benefits.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada (0.04)
- (3 more...)
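The effect-size aggregation the Ranger abstract describes rests on standard fixed-effect meta-analysis. As a minimal sketch of that underlying statistic (not Ranger's actual API — the function names and the treatment-vs-baseline numbers below are hypothetical), one can compute a standardized mean difference per task and combine them with inverse-variance weights:

```r
# Fixed-effect meta-analysis sketch: combine per-task standardized
# mean differences (Cohen's d) into one summary effect.
cohens_d <- function(mean_treat, mean_ctrl, sd_pooled, n_treat, n_ctrl) {
  d <- (mean_treat - mean_ctrl) / sd_pooled
  # approximate sampling variance of d
  v <- (n_treat + n_ctrl) / (n_treat * n_ctrl) + d^2 / (2 * (n_treat + n_ctrl))
  list(d = d, var = v)
}

fixed_effect_summary <- function(effects) {
  d <- sapply(effects, `[[`, "d")
  w <- 1 / sapply(effects, `[[`, "var")   # inverse-variance weights
  summary_d <- sum(w * d) / sum(w)
  se <- sqrt(1 / sum(w))
  list(effect = summary_d, ci = summary_d + c(-1.96, 1.96) * se)
}

# hypothetical treatment-vs-baseline results on three tasks
tasks <- list(
  cohens_d(0.52, 0.48, 0.10, 50, 50),
  cohens_d(0.61, 0.55, 0.12, 40, 40),
  cohens_d(0.33, 0.34, 0.08, 60, 60)
)
fixed_effect_summary(tasks)
```

The summary effect and its confidence interval are what a forest plot visualizes: one row per task plus a combined diamond at the bottom.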
Can AI and Machine Learning Help Park Rangers Prevent Poaching?
BRIAN KENNY: Artificial intelligence, or AI for short, is certainly creating a lot of buzz these days. And although it may seem like this amorphous thing that's somewhere off in our future, it's already very much in our midst. Navigation apps have turned printed maps into relics. Alexa knows what you need from the grocery store before you do. Google Nest has the house at just the right temperature before you roll out from under the covers. And this is all great, but now you have to wonder if this intro was written by me or ChatGPT. Which raises an important question.
- Asia > Vietnam (0.14)
- Asia > Cambodia (0.06)
- North America > United States > Rhode Island (0.04)
- (4 more...)
- Government (0.95)
- Law Enforcement & Public Safety (0.68)
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks
Biswas, Arpita, Killian, Jackson A., Diaz, Paula Rodriguez, Ghosh, Susobhan, Tambe, Milind
Motivated by applications such as machine repair, project monitoring, and anti-poaching patrol scheduling, we study intervention planning of stochastic processes under resource constraints. This planning problem has previously been modeled as restless multi-armed bandits (RMAB), where each arm is an intervention-dependent Markov Decision Process. However, the existing literature assumes all intervention resources belong to a single uniform pool, limiting their applicability to real-world settings where interventions are carried out by a set of workers, each with their own costs, budgets, and intervention effects. In this work, we consider a novel RMAB setting, called multi-worker restless bandits (MWRMAB) with heterogeneous workers. The goal is to plan an intervention schedule that maximizes the expected reward while satisfying budget constraints on each worker as well as fairness in terms of the load assigned to each worker. Our contributions are two-fold: (1) we provide a multi-worker extension of the Whittle index to tackle heterogeneous costs and per-worker budget and (2) we develop an index-based scheduling policy to achieve fairness. Further, we evaluate our method on various cost structures and show that our method significantly outperforms other baselines in terms of fairness without sacrificing much in reward accumulated.
- Europe > United Kingdom > England > Greater London > London (0.04)
- North America > United States > Massachusetts (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
Protecting Endangered Animals With AI
While AI is making a big impact in pretty much every business area, it is also important to note some of the ways it is helping to save our planet. Conservationists are increasingly turning to AI as an innovative solution to various biodiversity crises. It helps protect a diverse set of species and assists law enforcement agents, who are often short-staffed and find it nearly impossible to cover a vast stretch of land such as a national park. AI is especially useful here because it can take time-consuming work, such as constantly monitoring surveillance data, off the shoulders of human workers. In this article, we will talk about some of the interesting ways AI is being used to protect endangered species and the data annotation required to build these systems.
- Africa > Zambia (0.18)
- North America > United States (0.16)
Indian conservation rangers use artificial intelligence to protect 'vulnerable' tigers from poachers
Conservation rangers in India are using the power of artificial intelligence to protect the country's vulnerable tigers from poachers and other perils. Most of the nation's tigers - believed to number about 2,967 in total - live in one of 51 tiger reserves that cover a large area stretching 45,900 miles. Counting the animals isn't always easy, and the same can be said for protecting them, with deaths resulting from poaching, seizures, accidents or conflicts with humans totaling about 300 over the last four years. AI is helping conservation rangers to track the animals' movements. The AVI Foundation has developed an AI system that can use data collected by cameras and rangers, in combination with satellite data and information from the local population.
- Asia > India > Maharashtra (0.05)
- Asia > India > Madhya Pradesh (0.05)
Introducing random forests in R
In this post, I will present how to use random forests in classification, a prediction technique that generates a set of trees (hence, a forest), each trained on a bootstrap sample of the data with a random subset of features considered at each split. We do this to obtain trees that do not all rely on the strongest predictors from the start. I will test this technique on a LoanDefaults dataset to predict which customers will default on a loan payment in a specific month. This dataset has two interesting properties: the number of positive cases is much smaller than the number of negatives, and some of the existing features require preprocessing. I will be using the ranger (RANdom forest GEneRator) package, skimr to get a summary of the data, rpart and rpart.plot to generate an alternative decision tree model, BAdatasets to access the dataset, tidymodels for prediction workflow facilities, and forcats for the variable importance plot.
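As a minimal sketch of the workflow the post describes — using the built-in iris data as a stand-in for the LoanDefaults set, and skipping the tidymodels scaffolding — fitting a classification forest with ranger looks like this:

```r
# Minimal ranger classification sketch (iris stands in for LoanDefaults)
library(ranger)

set.seed(42)
fit <- ranger(
  Species ~ .,             # formula interface: predict Species from all columns
  data = iris,
  num.trees = 500,         # number of bootstrapped trees in the forest
  importance = "impurity"  # Gini-based variable importance
)

pred <- predict(fit, data = iris)
print(table(predicted = pred$predictions, actual = iris$Species))
print(sort(fit$variable.importance, decreasing = TRUE))
```

The `importance` argument is what feeds a variable importance plot later; `"impurity"` is cheap to compute, while `"permutation"` is slower but less biased toward high-cardinality features.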