 Computer Based Training


EDU Unlimited turns online learning into a one-time $20 purchase instead of ongoing tuition costs

PCWorld

TL;DR: Score lifetime access to EDU Unlimited for just $19.97 through May 31 (MSRP $600) and unlock 1,000+ online courses across tech, business, creative skills, and more with a single payment.

Online learning can get expensive fast, especially when a single course or boot camp can run into the hundreds or even thousands of dollars. EDU Unlimited by StackSkills flips that model by giving you one-time lifetime access to a massive library of 1,000+ courses across a wide range of subjects for just $19.97 during this limited-time offer (MSRP $600). From coding and marketing to creative hobbies like photography and design, StackSkills lets you build your dream skill set at your own pace, without the pressure.


MasterClass is 50% off today. It's worth it just for the entertainment

PCWorld

Until May 10th, MasterClass annual plans are 50% off, starting at $60/year. It's great for casual learners who want high-quality, entertaining courses from big names. With the job market being what it is, there's never been a better time to learn new skills (or brush up on old ones).


A single algorithm for both restless and rested rotting bandits

Seznec, Julien, Ménard, Pierre, Lazaric, Alessandro, Valko, Michal

arXiv.org Machine Learning

In many application domains (e.g., recommender systems, intelligent tutoring systems), the rewards associated with the actions tend to decrease over time. This decay is caused either by the actions executed in the past (e.g., a user may get bored when songs of the same genre are recommended over and over) or by an external factor (e.g., content becomes outdated). These two situations can be modeled as specific instances of the rested and restless bandit settings, where arms are rotting (i.e., their value decreases over time). These problems were thought to be significantly different, since Levine et al. (2017) showed that state-of-the-art algorithms for restless bandits perform poorly in the rested rotting setting. In this paper, we introduce a novel algorithm, Rotting Adaptive Window UCB (RAW-UCB), that achieves near-optimal regret in both the rotting rested and restless bandit settings, without any prior knowledge of the setting (rested or restless) or the type of non-stationarity (e.g., piece-wise constant, bounded variation). This is in striking contrast with previous negative results showing that no algorithm can achieve similar guarantees as soon as rewards are allowed to increase. We confirm our theoretical findings in a number of synthetic and dataset-based experiments.
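For readers who want a concrete picture of the adaptive-window idea, the sketch below is a minimal, unofficial Python rendition of an index in the spirit of RAW-UCB: for each arm it takes the minimum, over window sizes h, of the average of the h most recent rewards plus a confidence bonus shrinking like sqrt(log T / h), then pulls the arm with the largest index. The sigma and alpha parameters, the log(horizon) scaling, and the helper names are illustrative assumptions; the paper's exact confidence widths, constants, and analysis may differ.

```python
import numpy as np

def adaptive_window_index(arm_rewards, horizon, sigma=1.0, alpha=1.4):
    """Adaptive-window UCB index for one arm (sketch): the minimum, over
    all window sizes h, of the mean of the h most recent rewards plus a
    confidence bonus of order sigma * sqrt(alpha * log(horizon) / h)."""
    n = len(arm_rewards)
    # csum[h] is the sum of the last h rewards pulled from this arm.
    csum = np.concatenate(([0.0], np.cumsum(arm_rewards[::-1])))
    best = np.inf
    for h in range(1, n + 1):
        mean_h = csum[h] / h
        bonus = sigma * np.sqrt(alpha * np.log(horizon) / h)
        best = min(best, mean_h + bonus)
    return best

def adaptive_window_ucb(pull_arm, n_arms, horizon, sigma=1.0):
    """Play each arm once, then repeatedly pull the arm with the largest
    adaptive-window index. `pull_arm(i)` is a user-supplied function
    returning a noisy, possibly decaying reward for arm i."""
    history = [[pull_arm(i)] for i in range(n_arms)]
    for _ in range(n_arms, horizon):
        indices = [adaptive_window_index(history[i], horizon, sigma)
                   for i in range(n_arms)]
        arm = int(np.argmax(indices))
        history[arm].append(pull_arm(arm))
    return history

# Toy usage: arm 0 rots as it is pulled (rested decay), arm 1 is constant.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pulls = [0, 0]
    def pull_arm(i):
        pulls[i] += 1
        mean = max(0.0, 1.0 - 0.01 * pulls[i]) if i == 0 else 0.6
        return mean + rng.normal(0.0, 0.1)
    history = adaptive_window_ucb(pull_arm, n_arms=2, horizon=500)
    print("pull counts:", [len(h) for h in history])
```

Because the minimum over windows adapts to how recently an arm's value dropped, the same index can track both rested decay (driven by the arm's own pull count) and restless decay (driven by time), which is the point of the single-algorithm result; this toy run simply shows the index shifting pulls toward the constant arm once the rotting arm's recent averages fall below it.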