


WIRED

The app reads your email inbox and your meeting calendar, then gives you a short audio summary. It can help you spend less time scrolling, but of course, there are privacy drawbacks to consider.



The Download: cut through AI coding hype, and biotech trends to watch

MIT Technology Review

AI coding is now everywhere. But not everyone is convinced. Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems. The problem is that, right now, it's not easy to know which is true. As tech giants pour billions into large language models (LLMs), coding has been touted as the technology's killer app. Executives enamored with the potential are pushing engineers to lean into an AI-powered future.


AI-Powered Dating Is All Hype. IRL Cruising Is the Future

WIRED

AI-Powered Dating Is All Hype. Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes. I am, admittedly, a big flirt. I love everything about the exchange of getting to know another person.


HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models

Neural Information Processing Systems

Generative models often use human evaluations to measure the perceived quality of their outputs. Automated metrics are noisy, indirect proxies because they rely on heuristics or pretrained embeddings. Until now, however, direct human evaluation strategies have been ad hoc, neither standardized nor validated. Our work establishes a gold-standard human benchmark for generative realism. We construct Human eYe Perceptual Evaluation (HYPE), a human benchmark that is (1) grounded in psychophysics research on perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) able to produce separable model performances, and (4) efficient in cost and time. We introduce two variants: one measures visual perception under adaptive time constraints to determine the threshold at which a model's outputs appear real.
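
The adaptive time constraint HYPE describes is rooted in classic psychophysics staircase procedures. As a rough, hypothetical sketch of that idea (the function names, step sizes, and bounds below are illustrative, not the paper's protocol): exposure time shrinks after each correct real-vs-fake judgment and grows after each error, converging on the threshold at which a model's outputs fool the rater.

```python
# Hypothetical up-down staircase sketch: adjust exposure time based on
# whether the rater correctly judged real vs. generated at that exposure.
def staircase_threshold(respond_correctly, start_ms=500, step_ms=30,
                        floor_ms=50, ceil_ms=1000, trials=40):
    """Run `trials` rounds; `respond_correctly(exposure_ms)` simulates a rater."""
    exposure = start_ms
    history = []
    for _ in range(trials):
        history.append(exposure)
        if respond_correctly(exposure):
            exposure = max(floor_ms, exposure - step_ms)   # harder: shorter look
        else:
            exposure = min(ceil_ms, exposure + step_ms)    # easier: longer look
    # average the last few exposures as the threshold estimate
    tail = history[-10:]
    return sum(tail) / len(tail)

# toy rater: reliably correct above 200 ms, fooled at or below it
threshold = staircase_threshold(lambda ms: ms > 200)
```

With the toy rater above, the estimate settles near the rater's 200 ms limit, which is the kind of per-model threshold the benchmark reports.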


Can AI really help us discover new materials?

MIT Technology Review

Can AI really help us discover new materials? Judging from headlines and social media posts in recent years, one might reasonably assume that AI is going to fix the power grid, cure the world's diseases, and finish my holiday shopping for me. This week, we published a new package called Hype Correction. The collection of stories looks at how the world is starting to reckon with the reality of what AI can do, and what's just fluff. One of my favorite stories in that package comes from my colleague David Rotman, who took a hard look at AI for materials research. AI could transform the process of discovering new materials, an innovation that could be especially useful in the world of climate tech, which needs new batteries, semiconductors, magnets, and more.


A brief history of Sam Altman's hype

MIT Technology Review

Here's how pinning a utopian vision for AI on LLMs kicked off the hype cycle that's causing fears of a bubble today. Each time you've heard a borderline outlandish idea of what AI will be capable of, it often turns out that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it. For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. OpenAI's early releases around 2020 set the stage for a mania around large language models, and the launch of ChatGPT in November 2022 granted Altman a world stage on which to present his new thesis: that these models mirror human intelligence and could swing the doors open to a healthier and wealthier techno-utopia. Throughout, Altman's words have set the agenda. He has framed a prospective superintelligent AI as either humanistic or catastrophic, depending on what effect he was hoping to create, what he was raising money for, or which tech giant seemed like his most formidable competitor at the moment.


HYPE: Hybrid Planning with Ego Proposal-Conditioned Predictions

Yu, Hang, Jordan, Julian, Schmidt, Julian, Lindner, Silvan, Canevaro, Alessandro, Stork, Wilhelm

arXiv.org Artificial Intelligence

Safe and interpretable motion planning in complex urban environments needs to reason about bidirectional multi-agent interactions. This reasoning requires estimating the costs of potential ego driving maneuvers. Many existing planners generate initial trajectories with sampling-based methods and refine them by optimizing on learned predictions of future environment states, which requires a cost function that encodes the desired vehicle behavior. Designing such a cost function can be very challenging, especially if a wide range of complex urban scenarios has to be considered. We propose HYPE: HYbrid Planning with Ego proposal-conditioned predictions, a planner that integrates multimodal trajectory proposals from a learned proposal model as heuristic priors into a Monte Carlo Tree Search (MCTS) refinement. To model bidirectional interactions, we introduce an ego-conditioned occupancy prediction model, enabling consistent, scene-aware reasoning. Our design significantly simplifies cost function design in refinement by considering proposal-driven guidance, requiring only minimalistic grid-based cost terms. Evaluations on the large-scale real-world benchmarks nuPlan and DeepUrban show that HYPE achieves state-of-the-art performance, especially in safety and adaptability.
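
Feeding a learned proposal model into MCTS as a heuristic prior is typically done with a PUCT-style selection rule. The sketch below illustrates only that general mechanism, under assumed names; HYPE's actual refinement, occupancy prediction, and grid-based cost terms are more involved than this.

```python
import math

# PUCT-style child selection: the learned prior biases exploration toward
# proposals the model considers plausible, while visit counts and running
# value estimates gradually take over as the tree is searched.
def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    """Higher score = expand this child next; `prior` comes from the proposal model."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration

def select_child(children):
    """children: list of dicts with keys 'q', 'prior', 'visits'."""
    parent_visits = sum(c["visits"] for c in children) + 1
    return max(children,
               key=lambda c: puct_score(c["q"], c["prior"],
                                        parent_visits, c["visits"]))
```

Before any simulations, the child with the highest proposal prior wins; once a child accumulates visits, its exploration bonus shrinks and lower-prior alternatives get tried — which is what lets the search refine rather than merely replay the proposals.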



Hype or not? Formalizing Automatic Promotional Language Detection in Biomedical Research

Batalo, Bojan, Shimomoto, Erica K., Millar, Neil

arXiv.org Artificial Intelligence

In science, promotional language ('hype') is increasing and can undermine objective evaluation of evidence, impede research development, and erode trust in science. In this paper, we introduce the task of automatic detection of hype, which we define as hyperbolic or subjective language that authors use to glamorize, promote, embellish, or exaggerate aspects of their research. We propose formalized guidelines for identifying hype language and apply them to annotate a portion of the National Institutes of Health (NIH) grant application corpus. We then evaluate traditional text classifiers and language models on this task, comparing their performance with a human baseline. Our experiments show that formalizing annotation guidelines can help humans reliably annotate candidate hype adjectives and that using our annotated dataset to train machine learning models yields promising results. Our findings highlight the linguistic complexity of the task, and the potential need for domain knowledge and temporal awareness of the facts. While some linguistic works address hype detection, to the best of our knowledge, we are the first to approach it as a natural language processing task.
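
To make the task concrete, here is a toy lexicon-based detector in the spirit of annotating candidate hype adjectives; the word list, scoring, and threshold are illustrative assumptions, not the paper's guidelines or annotated NIH data.

```python
import re

# Illustrative hype-adjective lexicon (not the paper's annotation set).
HYPE_ADJECTIVES = {"groundbreaking", "revolutionary", "unprecedented",
                   "transformative", "novel", "cutting-edge"}

def hype_score(sentence):
    """Fraction of tokens that are candidate hype adjectives."""
    tokens = re.findall(r"[a-z\-]+", sentence.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in HYPE_ADJECTIVES)
    return hits / len(tokens)

def is_hyped(sentence, threshold=0.1):
    """Flag a sentence whose hype-adjective density exceeds the threshold."""
    return hype_score(sentence) >= threshold
```

A lexicon like this is exactly the kind of baseline the trained classifiers in the paper are compared against; as the abstract notes, real hype judgments also need domain knowledge and temporal context (yesterday's "unprecedented" result may be today's routine one), which a static word list cannot capture.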