

MasterClass is 50% off today. It's worth it just for the entertainment

PCWorld

When you purchase through links in our articles, we may earn a small commission. Until May 10th, MasterClass annual plans start at $60/year. It's great for casual learners who want high-quality, entertaining courses from big names. With the job market being what it is, there's never been a better time to learn new skills (or brush up on old ones).


This piano app listens and corrects you--and gives you 5 years to master it

PCWorld

A 5-year flowkey Classic Plan is $99.99 (MSRP $899). Trying to teach yourself piano usually breaks down at the same point: you can follow along with sheet music or a video, but you can't verify whether you're doing it right. And, honestly, who wants to take formal lessons every week? Instead, there's an app for that: flowkey, which turns your keyboard or piano into something closer to an interactive lesson setup.


OpenAI Really Wants Codex to Shut Up About Goblins

WIRED

OpenAI has a goblin problem. Instructions designed to guide the behavior of the company's latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures. "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query," read instructions in Codex CLI, a command-line tool for using AI to generate code. It is unclear why OpenAI felt compelled to spell this out for Codex--or indeed why its models might want to discuss goblins or pigeons in the first place.


Get Office 2024 & training courses for just $114

PCWorld

Get Microsoft Office 2024 Home & Business plus an 8-course training bundle for hundreds off. Many people use Microsoft Office every day--but not always to its full potential. This bundle pairs Microsoft Office 2024 Home & Business with an 8-course training program designed to close that gap. That includes topics like Excel formulas, workflow efficiency, and even how to integrate tools like ChatGPT into your daily work.


The Download: supercharged scams and studying AI healthcare

MIT Technology Review

Plus: DeepSeek has unveiled its long-awaited new AI model. When ChatGPT was released in late 2022, it showed how easily generative AI could create human-like text. This quickly caught the eye of cybercriminals, who began using LLMs to compose malicious emails. Since then, they've adopted AI for everything from turbocharged phishing and hyperrealistic deepfakes to automated vulnerability scans. Many organizations are now struggling to cope with the sheer volume of cyberattacks. AI is making them faster, cheaper, and easier to carry out, a problem set to worsen as more cybercriminals adopt these tools--and their capabilities improve.


At 'AI Coachella,' Stanford Students Line Up to Learn From Silicon Valley Royalty

WIRED

CS 153 has gone viral on the Palo Alto campus--and on X. Not everyone is happy about it. As thousands of influencers descended on southern California earlier this month for the annual Coachella Music Festival, a very Silicon Valley program dubbed "AI Coachella" was taking shape a few hundred miles north in Palo Alto. The class, CS 153, is one of Stanford's buzziest offerings this semester, and like the music festival, it features a star-studded lineup of celebrities--in this case, not pop artists, but Big Tech CEOs. The course is co-taught by Anjney Midha, a former Andreessen Horowitz general partner, and Michael Abbott, Apple's former VP of engineering for cloud services.


Doubly Outlier-Robust Online Infinite Hidden Markov Model

Yiu, Horace, Sánchez-Betancourt, Leandro, Cartea, Álvaro, Duran-Martin, Gerardo

arXiv.org Machine Learning

We derive a robust update rule for the online infinite hidden Markov model (iHMM) for settings where the streaming data contains outliers and the model is misspecified. Leveraging recent advances in generalised Bayesian inference, we define robustness via the posterior influence function (PIF) and provide conditions under which the online iHMM has bounded PIF. Imposing robustness inevitably induces an adaptation lag for regime switching. Our method, Batched Robust iHMM (BR-iHMM), balances adaptivity and robustness with two additional tunable parameters. Across limit order book data, hourly electricity demand, and a synthetic high-dimensional linear system, BR-iHMM reduces one-step-ahead forecasting error by up to 67% relative to competing online Bayesian methods. Together with theoretical guarantees of bounded PIF, our results highlight the practicality of our approach for both forecasting and interpretable online learning.
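To give a feel for what "bounded influence" means in an online HMM filter, here is a minimal toy sketch. It is not the paper's BR-iHMM (the PIF machinery, batching, and the two tunable parameters are not reproduced here); the function name and the cap parameter `c` are my own illustrative choices. The idea shown is simply that capping how far one observation's log-likelihood can fall below the best state's prevents a single outlier from collapsing the filtered posterior.

```python
import numpy as np

def robust_forward_step(alpha, A, loglik, c=3.0):
    """One outlier-robust forward-filter step for a discrete HMM.

    alpha:  current filtered state probabilities, shape (K,)
    A:      state transition matrix, shape (K, K)
    loglik: per-state log-likelihood of the new observation, shape (K,)
    c:      cap on the log-likelihood spread (hypothetical knob, not
            the paper's parameter) -- bounds any one observation's pull
    """
    # Predict step: propagate beliefs through the transition matrix.
    pred = alpha @ A
    # Robustify: no state's log-likelihood may fall more than c below
    # the best one, so an extreme outlier cannot zero out any state.
    ll = np.clip(loglik, loglik.max() - c, loglik.max())
    # Update step: reweight and renormalise.
    post = pred * np.exp(ll - ll.max())
    return post / post.sum()

# An outlier that standard filtering would treat as near-impossible
# for state 2 now leaves that state with non-negligible mass.
alpha = np.array([0.5, 0.5])
A = np.eye(2)
post = robust_forward_step(alpha, A, np.array([-0.5, -50.0]), c=3.0)
```

With the cap at `c=3.0`, state 2 keeps roughly 5% posterior mass instead of being driven to essentially zero; raising `c` recovers standard Bayesian updating, which mirrors the adaptivity-versus-robustness trade-off the abstract describes.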


An Optimal Sauer Lemma Over $k$-ary Alphabets

Hanneke, Steve, Meng, Qinglin, Moran, Shay, Shaeiri, Amirreza

arXiv.org Machine Learning

The Sauer-Shelah-Perles Lemma is a cornerstone of combinatorics and learning theory, bounding the size of a binary hypothesis class in terms of its Vapnik-Chervonenkis (VC) dimension. For classes of functions over a $k$-ary alphabet, namely the multiclass setting, the Natarajan dimension has long served as an analogue of VC dimension, yet the corresponding Sauer-type bounds are suboptimal for alphabet sizes $k>2$. In this work, we establish a sharp Sauer inequality for multiclass and list prediction. Our bound is expressed in terms of the Daniely--Shalev-Shwartz (DS) dimension, and more generally with its extension, the list-DS dimension -- the combinatorial parameters that characterize multiclass and list PAC learnability. Our bound is tight for every alphabet size $k$, list size $\ell$, and dimension value, replacing the exponential dependence on $\ell$ in the Natarajan-based bound by the optimal polynomial dependence, and improving the dependence on $k$ as well. Our proof uses the polynomial method. In contrast to the classical VC case, where several direct combinatorial proofs are known, we are not aware of any purely combinatorial proof in the DS setting. This motivates several directions for future research, which are discussed in the paper. As consequences, we obtain improved sample complexity upper bounds for list PAC learning and for uniform convergence of list predictors, sharpening the recent results of Charikar et al.~(STOC~2023), Hanneke et al.~(COLT~2024), and Brukhim et al.~(NeurIPS~2024).
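For orientation, the classical binary statement being generalised here is the following (the paper's sharp DS-dimension bound for $k$-ary alphabets and list size $\ell$ is not reproduced in this abstract):

```latex
% Sauer-Shelah-Perles Lemma (binary case): if a hypothesis class
% \mathcal{H} \subseteq \{0,1\}^n has VC dimension d, then
\[
  |\mathcal{H}| \;\le\; \sum_{i=0}^{d} \binom{n}{i} \;=\; O(n^{d}).
\]
```

The paper's contribution is the analogue of this inequality for classes over $\{1,\dots,k\}^n$, stated in terms of the (list-)DS dimension rather than the Natarajan dimension, with dependence on $k$ and $\ell$ shown to be optimal.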


Deep Learning for Sequential Decision Making under Uncertainty: Foundations, Frameworks, and Frontiers

Buyuktahtakin, I. Esra

arXiv.org Machine Learning

Artificial intelligence (AI) is moving increasingly beyond prediction to support decisions in complex, uncertain, and dynamic environments. This shift creates a natural intersection with operations research and management sciences (OR/MS), which have long offered conceptual and methodological foundations for sequential decision-making under uncertainty. At the same time, recent advances in deep learning, including feedforward neural networks, LSTMs, transformers, and deep reinforcement learning, have expanded the scope of data-driven modeling and opened new possibilities for large-scale decision systems. This tutorial presents an OR/MS-centered perspective on deep learning for sequential decision-making under uncertainty. Its central premise is that deep learning is valuable not as a replacement for optimization, but as a complement to it. Deep learning brings adaptability and scalable approximation, whereas OR/MS provides the structural rigor needed to represent constraints, recourse, and uncertainty. The tutorial reviews key decision-making foundations, connects them to the major neural architectures in modern AI, and discusses leading approaches to integrating learning and optimization. It also highlights emerging impact in domains such as supply chains, healthcare and epidemic response, agriculture, energy, and autonomous operations. More broadly, it frames these developments as part of a wider transition from predictive AI toward decision-capable AI and highlights the role of OR/MS in shaping the next generation of integrated learning--optimization systems.


Is Schoolwork Optional Now?

The Atlantic - Technology

Education is on the verge of becoming fully automated. William Liu is grateful that he finished high school when he did. If the latest AI tools had been around then, he told me, he might have been tempted to use them to do his homework. Liu, now a sophomore at Stanford, finished high school all the way back in 2024. "I have a younger sibling who is just graduating high school," he said.