Goto

Collaborating Authors

 cursor


Apple's Next Chapter, SpaceX and Cursor Strike a Deal, and Palantir's Controversial Manifesto

WIRED

In this week's episode of, we talk about Tim Cook's legacy as CEO at Apple and what his long-rumored departure means for the future of one of the world's biggest companies. They also go into the reasoning behind SpaceX and Cursor's surprising deal, and why Palantir's self-published manifesto drew a lot of heat online. Also, we discuss why some conspiracy theorists are leaving Trump's side, and how a scammer created an AI-generated woman to attract and grift MAGA men. Tim Cook's Legacy Is Turning Apple Into a Subscription This Scammer Used an AI-Generated MAGA Girl to Grift'Super Dumb' Men Write to us at [email protected] . You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link . Zoë, Leah, and I have really enjoyed being your new hosts these past few weeks, and we want to hear from you. If you like the show and have a minute, please leave us a review in the podcast or app of your choice. It really helps us reach more people, and for any questions and comments, you can always reach us at [email protected] . I missed you so much. And I missed you the exact same amount. I'm going to go away more often. Absence makes the heart go fonder, as we all know, and I'm thrilled to be here. This week on the show, we're saying goodbye to Apple CEO, Tim Cook, who announced that he is stepping down from the top gig at the company. And, more than just talking about his legacy at Apple, we'll be looking into what this long-awaited shift actually means for the future of one of the world's biggest companies. We'll also get into why SpaceX and Cursor's potential $60 billion deal announced this week is pretty staggering, and we'll get into Palantir's controversial 22-point manifesto. I feel like manifesto's inherently controversial, otherwise they'd be memos that they posted on X this week.


SpaceX secures option to buy AI startup Cursor for 60bn or partner for 10bn

The Guardian

Elon Musk speaks at the SpaceX Hyperloop Pod Competition II in Hawthorne, California, in 2017. Elon Musk speaks at the SpaceX Hyperloop Pod Competition II in Hawthorne, California, in 2017. Cursor is a Silicon Valley startup using AI to automate coding as Elon Musk's firm seeks foothold in the AI market SpaceX said it has secured an option to either acquire code-generation startup Cursor for $60bn later this year, or pay $10bn for their new partnership, as it pushes deeper into the lucrative market for AI developer tools. Along with OpenAI and Anthropic, Cursor is one of several Silicon Valley startups that has drawn waves of developers by using artificial intelligence to automate coding, a business where AI companies have found early commercial traction. The deal could give xAI, the Grok chatbot maker that SpaceX merged with in February, a stronger foothold in the AI coding market where it has so far lagged rivals.


SpaceX and Cursor strike partnership that might end in a 60 billion acquisition

Engadget

The X and xAI owner is now working closely together with the maker of the AI coding tool. The xAI and SpaceX logos appear on a smartphone screen placed on a reflective surface onto which an abstract black and blue illustration is projected. SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the AI company for $60 billion later this year. SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI, SpaceX wrote in a post on X. SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI. According to SpaceX, the deal allows for it to either invest $10 billion into the company known for its AI coding tool, or acquire it entirely later this year for $60 billion.


Cursor Launches an AI Coding Tool For Designers

WIRED

The 300-person startup hopes bringing designers aboard will give it an edge in an increasingly competitive AI software market. Cursor, the wildly popular AI coding startup, is launching a new feature that lets people design the look and feel of web applications with AI. The tool, Visual Editor, is essentially a vibe-coding product for designers, giving them access to the same fine-grained controls they'd expect from professional design software. But in addition to making changes manually, the tool lets them request edits from Cursor's AI agent using natural language. Cursor is best known for its AI coding platform, but with Visual Editor, the startup wants to capture other parts of the software creation process.



Taught by the Flawed: How Dataset Insecurity Breeds Vulnerable AI Code

Xia, Catherine, Alalfi, Manar H.

arXiv.org Artificial Intelligence

AI programming assistants have demonstrated a tendency to generate code containing basic security vulnerabilities. While developers are ultimately responsible for validating and reviewing such outputs, improving the inherent quality of these generated code snippets remains essential. A key contributing factor to insecure outputs is the presence of vulnerabilities in the training datasets used to build large language models (LLMs). To address this issue, we propose curating training data to include only code that is free from detectable vulnerabilities. In this study, we constructed a secure dataset by filtering an existing Python corpus using a static analysis tool to retain only vulnerability-free functions. We then trained two transformer-based models: one on the curated dataset and one on the original, unfiltered dataset. The models were evaluated on both the correctness and security of the code they generated in response to natural language function descriptions. Our results show that the model trained on the curated dataset produced outputs with fewer security issues, while maintaining comparable functional correctness. These findings highlight the importance of secure training data in improving the reliability of AI-based programming assistants, though further enhancements to model architecture and evaluation are needed to reinforce these outcomes.


MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols

Yang, Yixuan, Wu, Daoyuan, Chen, Yufan

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly integrated into real-world applications via the Model Context Protocol (MCP), a universal, open standard for connecting AI agents with data sources and external tools. While MCP enhances the capabilities of LLM-based agents, it also introduces new security risks and expands their attack surfaces. In this paper, we present the first systematic taxonomy of MCP security, identifying 17 attack types across 4 primary attack surfaces. Our benchmark is modular and extensible, allowing researchers to incorporate custom implementations of clients, servers, and transport protocols for systematic security assessment. Experimental results show that over 85% of the identified attacks successfully compromise at least one platform, with core vulnerabilities universally affecting Claude, OpenAI, and Cursor, while prompt-based and tool-centric attacks exhibit considerable variability across different hosts and models. In addition, current protection mechanisms have little effect against these attacks. Large language models (LLMs) are transforming the landscape of intelligent systems, enabling powerful language understanding, reasoning, and generative capabilities. To further unlock their potential in real-world applications, there is an increasing demand for LLMs to interact with external data, tools, and services (Lin et al., 2025; Hasan et al., 2025). The Model Context Protocol (MCP) has emerged as a universal, open standard for connecting AI agents to diverse resources, facilitating richer and more dynamic task-solving. However, this integration also introduces a broader attack surface: vulnerabilities may arise not only from user prompts (such as prompt injection (Shi et al., 2024)), but also from insecure clients, transport protocols, and malicious or misconfigured servers (Hasan et al., 2025). As MCP-powered agents increasingly interact with sensitive enterprise systems and even physical infrastructure, securing the entire MCP stack becomes critical to prevent data breaches, unauthorized actions, and real-world harm (Narajala & Habler, 2025).


Paper2Video: Automatic Video Generation from Scientific Papers

Zhu, Zeyu, Lin, Kevin Qinghong, Shou, Mike Zheng

arXiv.org Artificial Intelligence

Academic presentation videos have become an essential medium for research communication, yet producing them remains highly labor-intensive, often requiring hours of slide design, recording, and editing for a short 2 to 10 minutes video. Unlike natural video, presentation video generation involves distinctive challenges: inputs from research papers, dense multi-modal information (text, figures, tables), and the need to coordinate multiple aligned channels such as slides, subtitles, speech, and human talker. To address these challenges, we introduce Paper2Video, the first benchmark of 101 research papers paired with author-created presentation videos, slides, and speaker metadata. We further design four tailored evaluation metrics--Meta Similarity, PresentArena, PresentQuiz, and IP Memory--to measure how videos convey the paper's information to the audience. Building on this foundation, we propose PaperTalker, the first multi-agent framework for academic presentation video generation. It integrates slide generation with effective layout refinement by a novel effective tree search visual choice, cursor grounding, subtitling, speech synthesis, and talking-head rendering, while parallelizing slide-wise generation for efficiency. Experiments on Paper2Video demonstrate that the presentation videos produced by our approach are more faithful and informative than existing baselines, establishing a practical step toward automated and ready-to-use academic video generation. Our dataset, agent, and code are available at https://github.com/showlab/Paper2Video.


Learning GUI Grounding with Spatial Reasoning from Visual Feedback

Zhao, Yu, Chen, Wei-Ning, Inan, Huseyin Atahan, Kessler, Samuel, Wang, Lu, Wutschitz, Lukas, Yang, Fangkai, Zhang, Chaoyun, Minervini, Pasquale, Rajmohan, Saravan, Sim, Robert

arXiv.org Artificial Intelligence

Graphical User Interface (GUI) grounding is commonly framed as a coordinate prediction task -- given a natural language instruction, generate on-screen coordinates for actions such as clicks and keystrokes. However, recent Vision Language Models (VLMs) often fail to predict accurate numeric coordinates when processing high-resolution GUI images with complex layouts. To address this issue, we reframe GUI grounding as an \emph{interactive search task}, where the VLM generates actions to move a cursor in the GUI to locate UI elements. At each step, the model determines the target object, evaluates the spatial relations between the cursor and the target, and moves the cursor closer to the target conditioned on the movement history. In this interactive process, the rendered cursor provides visual feedback to help the model align its predictions with the corresponding on-screen locations. We train our GUI grounding model, GUI-Cursor, using multi-step online reinforcement learning with a dense trajectory-based reward function. Our experimental results show that GUI-Cursor, based on Qwen2.5-VL-7B, improves the GUI grounding accuracy and achieves state-of-the-art results on ScreenSpot-v2 ($88.8\% \rightarrow 93.9\%$) and ScreenSpot-Pro ($26.8\% \rightarrow 56.5\%$). Moreover, we observe that GUI-Cursor learns to solve the problem within two steps for 95\% of instances and can adaptively conduct more steps on more difficult examples.


Researchers created a soft squeezable computer mouse

Popular Science

'The mouse is long overdue for reinvention.' Breakthroughs, discoveries, and DIY tips sent every weekday. Many of us subscribe to the old adage, "If it ain't broke, don't fix it." But what if that something was actually broken all along and we just didn't realize it? That's the argument presented in an upcoming issue of the journal by researchers from Nazarbayev University in Kazakhstan.