- North America > United States > Maryland (0.04)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- Europe > Sweden > Stockholm > Stockholm (0.04)
- (2 more...)
Tracking Most Significant Shifts in Nonparametric Contextual Bandits
We study nonparametric contextual bandits where Lipschitz mean reward functions may change over time. We first establish the minimax dynamic regret rate in this less understood setting in terms of the number of changes $L$ and the total variation $V$, both of which capture all changes in distribution over the context space, and argue that state-of-the-art procedures are suboptimal in this setting. Next, we turn to the question of adaptivity for this setting, i.e., achieving the minimax rate without knowledge of $L$ or $V$. Importantly, we posit that the bandit problem, viewed locally at a given context $X_t$, should not be affected by reward changes in other parts of the context space $\cal X$. We therefore propose a notion of change, which we term experienced significant shifts, that better accounts for locality, and thus counts considerably fewer changes than $L$ and $V$. Furthermore, similar to recent work on non-stationary MAB (Suk & Kpotufe, 2022), experienced significant shifts only count the most significant changes in mean rewards, e.g., severe best-arm changes relevant to observed contexts. Our main result is to show that this more tolerant notion of change can in fact be adapted to.
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.47)
- North America > United States > New York (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- (3 more...)
The Strangely Believable Tale of a Mythical Rogue Drone
Did you hear about the Air Force AI drone that went rogue and attacked its operators inside a simulation? The cautionary tale was told by Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, during a speech at an aerospace and defense event in London late last month. It apparently involved taking the kind of learning algorithm that has been used to train computers to play video games and board games like chess and Go, and using it to train a drone to hunt and destroy surface-to-air missiles. "At times, the human operator would tell it not to kill that threat, but it got its points by killing that threat," Hamilton was widely reported as telling the audience in London. It sounds like just the sort of thing that AI experts have begun warning increasingly clever and maverick algorithms might do.
- Leisure & Entertainment > Games (1.00)
- Government > Military > Air Force (0.83)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.40)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.33)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.33)