
Loyalty Is Dead in Silicon Valley

WIRED

Founders used to be wedded to their companies. Now, anyone can be lured away for the right price. Since the middle of last year, there have been at least three major AI "acqui-hires" in Silicon Valley. Meta invested more than $14 billion in Scale AI and brought on its CEO, Alexandr Wang; Google spent a cool $2.4 billion to license Windsurf's technology and fold its cofounders and research teams into DeepMind; and Nvidia wagered $20 billion on Groq's inference technology and hired its CEO and other staffers. The frontier AI labs, meanwhile, have been playing a high-stakes and seemingly never-ending game of talent musical chairs.


Developers Say GPT-5 Is a Mixed Bag

WIRED

When OpenAI launched GPT-5 last week, it told software engineers the model was designed to be a "true coding collaborator" that excels at generating high-quality code and performing agentic, or automated, software tasks. While the company didn't say so explicitly, OpenAI appeared to be taking direct aim at Anthropic's Claude Code, which has quickly become many developers' favored tool for AI-assisted coding. But developers tell WIRED that GPT-5 has been a mixed bag so far. It shines at technical reasoning and planning coding tasks, but some say that Anthropic's newest Opus and Sonnet reasoning models still produce better code. Depending on which verbosity setting developers are using--low, medium, or high--the model can be more expansive, which sometimes leads it to generate unnecessary or redundant lines of code.


Guidance for Intra-cardiac Echocardiography Manipulation to Maintain Continuous Therapy Device Tip Visibility

Huh, Jaeyoung, Kapoor, Ankur, Kim, Young-Ho

arXiv.org Artificial Intelligence

Intra-cardiac Echocardiography (ICE) plays a critical role in Electrophysiology (EP) and Structural Heart Disease (SHD) interventions by providing real-time visualization of intracardiac structures. However, maintaining continuous visibility of the therapy device tip remains a challenge due to the frequent adjustments required during manual ICE catheter manipulation. To address this, we propose an AI-driven tracking model that estimates the device tip incident angle and passing point within the ICE imaging plane, ensuring continuous visibility and facilitating robotic ICE catheter control. A key innovation of our approach is the hybrid dataset generation strategy, which combines clinical ICE sequences with synthetic data augmentation to enhance model robustness. We collected ICE images in a water chamber setup, equipping both the ICE catheter and device tip with electromagnetic (EM) sensors to establish precise ground-truth locations. Synthetic sequences were created by overlaying catheter tips onto real ICE images, preserving motion continuity while simulating diverse anatomical scenarios. The final dataset consists of 5,698 ICE-tip image pairs, ensuring comprehensive training coverage. Our model architecture integrates a pretrained ultrasound (US) foundation model, trained on 37.4M echocardiography images, for feature extraction. A transformer-based network processes sequential ICE frames, leveraging historical passing points and incident angles to improve prediction accuracy. Experimental results demonstrate that our method achieves a 3.32-degree entry angle error and a 12.76-degree rotation angle error. This AI-driven framework lays the foundation for real-time robotic ICE catheter adjustments, minimizing operator workload while ensuring consistent therapy device visibility. Future work will focus on expanding clinical datasets to further enhance model generalization.
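The pipeline the abstract describes (per-frame feature extraction, then a sequence model over frame history plus past passing points and incident angles) can be sketched in miniature. Everything below is an illustrative assumption: random weights and a single-head NumPy attention stand in for both the US foundation model and the authors' full transformer, which are not public here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_tip(frame_feats, hist_angles, hist_points, rng=np.random.default_rng(0)):
    """Toy sequential predictor: attend over past frames, then regress
    the incident angle (degrees) and the in-plane passing point (x, y).

    frame_feats : (T, D) per-frame features (stand-in for a US foundation model)
    hist_angles : (T,)   previously estimated incident angles
    hist_points : (T, 2) previously estimated passing points
    """
    # Concatenate image features with the history the abstract mentions.
    x = np.concatenate([frame_feats, hist_angles[:, None], hist_points], axis=1)
    d = x.shape[1]
    # Randomly initialised single-head self-attention (illustrative only).
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    context = softmax(q @ k.T / np.sqrt(d), axis=-1) @ v
    # Regress [angle, px, py] from the most recent time step's context.
    Wout = rng.standard_normal((d, 3)) / np.sqrt(d)
    angle, px, py = context[-1] @ Wout
    return angle, (px, py)
```

In a real system the regressed angle and passing point would feed a robotic controller that re-orients the ICE catheter so the device tip stays inside the imaging plane.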


Here's why we need to start thinking of AI as "normal"

MIT Technology Review

Instead, according to the researchers, AI is a general-purpose technology whose application might be better compared to the drawn-out adoption of electricity or the internet than to nuclear weapons--though they concede this is in some ways a flawed analogy. The core point, Kapoor says, is that we need to start differentiating between the rapid development of AI methods--the flashy and impressive displays of what AI can do in the lab--and what comes from the actual applications of AI, which in historical examples of other technologies lag behind by decades. "Much of the discussion of AI's societal impacts ignores this process of adoption," Kapoor told me, "and expects societal impacts to occur at the speed of technological development." In other words, the adoption of useful artificial intelligence, in his view, will be less of a tsunami and more of a trickle. In the essay, the pair make some other bracing arguments: terms like "superintelligence" are so incoherent and speculative that we shouldn't use them; AI won't automate everything but will birth a category of human labor that monitors, verifies, and supervises AI; and we should focus more on AI's likelihood to worsen current problems in society than the possibility of it creating new ones.


Generative AI Hype Feels Inescapable. Tackle It Head On With Education

WIRED

Arvind Narayanan, a computer science professor at Princeton University, is best known for calling out the hype surrounding artificial intelligence in his Substack, AI Snake Oil, written with PhD candidate Sayash Kapoor. The two authors recently released a book based on their popular newsletter about AI's shortcomings. But don't get it twisted--they aren't against using new technology. "It's easy to misconstrue our message as saying that all of AI is harmful or dubious," Narayanan says. He makes clear, during a conversation with WIRED, that his rebuke is not aimed at the software per se, but rather at the culprits who continue to spread misleading claims about artificial intelligence.


Logically Constrained Robotics Transformers for Enhanced Perception-Action Planning

Kapoor, Parv, Vemprala, Sai, Kapoor, Ashish

arXiv.org Artificial Intelligence

With the advent of large foundation model based planning, there is a dire need to ensure their output aligns with the stakeholder's intent. When these models are deployed in the real world, the need for alignment is magnified by the potential cost to life and infrastructure from unexpected failures. Temporal logic specifications have long provided a way to constrain system behaviors and are a natural fit for these use cases. In this work, we propose a novel approach to factor in signal temporal logic specifications while using autoregressive transformer models for trajectory planning. We also provide a trajectory dataset for pretraining and evaluating foundation models. Our proposed technique achieves 74.3% higher specification satisfaction over the baselines.
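The "specification satisfaction" the abstract measures rests on a standard signal temporal logic (STL) quantity: the robustness of a trajectory, which is positive when the specification is met and negative when it is violated. The sketch below is my own minimal illustration of two common STL operators, not the paper's dataset, model, or specifications.

```python
import numpy as np

def eventually_reach(traj, goal, radius):
    """Robustness of F(|x - goal| <= radius):
    positive iff some waypoint enters the goal region."""
    d = np.linalg.norm(traj - goal, axis=1)
    return np.max(radius - d)

def always_avoid(traj, obstacle, radius):
    """Robustness of G(|x - obstacle| >= radius):
    positive iff every waypoint stays clear of the obstacle."""
    d = np.linalg.norm(traj - obstacle, axis=1)
    return np.min(d - radius)

# A candidate plan (e.g. one sampled from an autoregressive planner)
# can be scored against both specifications before execution.
traj = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 1.0]])
rho_goal = eventually_reach(traj, goal=np.array([3.0, 1.0]), radius=0.2)
rho_safe = always_avoid(traj, obstacle=np.array([1.5, -1.0]), radius=0.5)
satisfied = min(rho_goal, rho_safe) > 0
```

A planner can reject or re-rank sampled trajectories whose robustness is negative, and smooth approximations of the min/max operators make the same quantity usable as a differentiable training signal.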


AI meme wars hit India election campaign, testing social platforms

Al Jazeera

Bengaluru, India – On February 20, India's chief opposition party, the Indian National Congress (INC), uploaded a video parodying Prime Minister Narendra Modi on Instagram that has amassed over 1.5 million views. It is a short clip from a new Hindi music album named "Chor" (thief), where Modi's digital likeness is grafted onto the lead singer. The song's lyrics were humorously reworked to describe a thief's – in this case, a business tycoon's – attempts to steal, and Modi's handing over of coal mines, ports, power lines and, ultimately, the country. The video isn't hyperrealistic, but a pithy AI meme that uses Modi's voice and face clones to drive home the nagging criticism of his close ties to Indian business moguls. That same day, the official Bharatiya Janata Party (BJP) handle on Instagram, with over seven million followers, uploaded its own video.


Supercharging Cassandra NoSQL For Machine Learning

#artificialintelligence

DataStax, the driving force behind the ongoing development and commercialization of the open source NoSQL Apache Cassandra database, had been in business for nine years in 2019 when it made a hard shift to the cloud. The company had already been working with organizations whose businesses stretched into hybrid and multicloud environments, but its "cloud first" strategy was designed to make it easier for the company to grow and easier for customers to consume Cassandra. This cloud-first approach is shared by many established and startup software companies alike. Back then, DataStax had just unveiled Constellation, a cloud data platform for developers to build new applications and for operations teams to manage them, with the first offering on the platform being DataStax Apache Cassandra as a Service. A year later, the company announced its Astra database cloud service, and in 2021 it released a new version of Astra for serverless deployments. The transition to the cloud was important in making it easier for enterprises to use Cassandra, according to Ed Anuff, chief product officer at DataStax.


The reproducibility issues that haunt health-care AI

#artificialintelligence

The use of artificial intelligence in medicine is growing rapidly. Each day, around 350 people in the United States die from lung cancer. Many of those deaths could be prevented by screening with low-dose computed tomography (CT) scans. But scanning millions of people would produce millions of images, and there aren't enough radiologists to do the work. Even if there were, specialists regularly disagree about whether images show cancer or not. The 2017 Kaggle Data Science Bowl set out to test whether machine-learning algorithms could fill the gap.


Sloppy Use of Machine Learning is Causing a 'Reproducibility Crisis' in Science

WIRED

History shows civil wars to be among the messiest, most horrifying of human affairs. So Princeton professor Arvind Narayanan and his PhD student Sayash Kapoor got suspicious last year when they discovered a strand of political science research claiming to predict when a civil war will break out with more than 90 percent accuracy, thanks to artificial intelligence. A series of papers described astonishing results from using machine learning, the technique beloved by tech giants that underpins modern AI. Applying it to data such as a country's gross domestic product and unemployment rate was said to beat more conventional statistical methods at predicting the outbreak of civil war by almost 20 percentage points. Yet when the Princeton researchers looked more closely, many of the results turned out to be a mirage.