Best Portable Blenders of 2026: Ninja, Nutribullet, Beast

WIRED

A cordless, portable blender was barely possible a few years ago, but battery tech keeps getting better, and two cordless blenders are now ahead of the pack. The best portable blender I've tested, the Ninja Blast Max ($100), is fully able to make a six-pack of crushed-ice margaritas at your next picnic or blend up a berry-filled protein shake at the gym without breaking much of a sweat. Meanwhile, the ingeniously designed Nutribullet Flip ($115) offers more torque than previous-generation blenders, plus enough insulation to keep ice frozen until it's time for lunch (or even dinner).


James Cameron says AI actors are 'horrifying to me'

The Guardian

'Generative AI can't create something new,' says James Cameron. The Avatar director, known for his advocacy of new technology, told an interviewer that generative AI performance puts 'all human experience into a blender'. Cameron has called AI actors "horrifying" and said what generative AI technology creates is "an average". He was speaking to CBS Sunday Morning in the run-up to the release of the third Avatar film, subtitled Fire and Ash, and was asked about the pioneering technology he used in his film-making. After praising motion-capture performance as "a celebration of the actor-director moment", Cameron expressed his disdain for artificial intelligence.


Compositional Symmetry as Compression: Lie Pseudogroup Structure in Algorithmic Agents

Ruffini, Giulio

arXiv.org Artificial Intelligence

In the algorithmic (Kolmogorov) view, agents are programs that track and compress sensory streams using generative programs. We propose a framework where the relevant structural prior is simplicity (Solomonoff) understood as \emph{compositional symmetry}: natural streams are well described by (local) actions of finite-parameter Lie pseudogroups on geometrically and topologically complex low-dimensional configuration manifolds (latent spaces). Modeling the agent as a generic neural dynamical system coupled to such streams, we show that accurate world-tracking imposes (i) \emph{structural constraints} -- equivariance of the agent's constitutive equations and readouts -- and (ii) \emph{dynamical constraints}: under static inputs, symmetry induces conserved quantities (Noether-style labels) in the agent dynamics and confines trajectories to reduced invariant manifolds; under slow drift, these manifolds move but remain low-dimensional. This yields a hierarchy of reduced manifolds aligned with the compositional factorization of the pseudogroup, providing a geometric account of the ``blessing of compositionality'' in deep models. We connect these ideas to the Spencer formalism for Lie pseudogroups and formulate a symmetry-based, self-contained version of predictive coding in which higher layers receive only \emph{coarse-grained residual transformations} (prediction-error coordinates) along symmetry directions unresolved at lower layers.
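The structural constraint (i) admits a compact statement. As a hedged sketch (the notation here is generic, not taken from the paper): for a pseudogroup element $g$ acting on the sensory stream $x$, with $\rho$ the induced action on agent states, equivariance of a readout $f$ and the Noether-style conservation under static inputs read:

```latex
% Equivariance of the agent's readout under the (local) group action:
f(g \cdot x) = \rho(g)\, f(x), \qquad \forall g \in \mathcal{G},
% and, for static inputs x, a Noether-style conserved label Q along
% trajectories of the agent dynamics \dot{z} = F(z, x):
\frac{d}{dt}\, Q\bigl(z(t)\bigr) = 0 .
```

The conserved labels $Q$ are what confine trajectories to the reduced invariant manifolds described in the abstract.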



3Dify: a Framework for Procedural 3D-CG Generation Assisted by LLMs Using MCP and RAG

Hayashi, Shun-ichiro, Mukunoki, Daichi, Hoshino, Tetsuya, Ohshima, Satoshi, Katagiri, Takahiro

arXiv.org Artificial Intelligence

Abstract--This paper proposes "3Dify," a procedural 3D computer graphics (3D-CG) generation framework utilizing Large Language Models (LLMs). The framework enables users to generate 3D-CG content solely through natural language instructions. For 3D-CG generation support, 3Dify automates the operation of various Digital Content Creation (DCC) tools via the Model Context Protocol (MCP). When DCC tools do not support MCP-based interaction, the framework employs the Computer-Using Agent (CUA) method to automate Graphical User Interface (GUI) operations. Moreover, to enhance image generation quality, 3Dify allows users to provide feedback by selecting preferred images from multiple candidates. The LLM then learns variable patterns from these selections and applies them to subsequent generations. Furthermore, 3Dify supports the integration of locally deployed LLMs, enabling users to utilize custom-developed models and to reduce both time and monetary costs associated with external API calls by leveraging their own computational resources. Its applications extend beyond entertainment industries such as movies and games to areas including product design in manufacturing, surgical simulation in healthcare, education, and digital-twin technologies that replicate the real world within virtual spaces.
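The candidate-selection feedback loop described above can be sketched as a simple preference update: each generated image corresponds to a set of procedural parameters, and the user's picks shift the sampling distribution for the next round. This is a minimal illustration with made-up parameter names, not 3Dify's actual mechanism:

```python
import random

def generate_candidates(prefs, n=4, spread=0.2, rng=None):
    """Sample n parameter sets around the current preference centre."""
    rng = rng or random.Random(0)
    return [{k: v + rng.uniform(-spread, spread) for k, v in prefs.items()}
            for _ in range(n)]

def update_preferences(prefs, chosen, lr=0.5):
    """Move the preference centre toward the user-selected candidate."""
    return {k: (1 - lr) * prefs[k] + lr * chosen[k] for k in prefs}

# Hypothetical procedural parameters for a generated scene.
prefs = {"roughness": 0.5, "scale": 1.0}
rng = random.Random(42)
for _ in range(3):
    batch = generate_candidates(prefs, rng=rng)
    chosen = max(batch, key=lambda c: c["roughness"])  # stand-in for a user click
    prefs = update_preferences(prefs, chosen)
```

In 3Dify the analogous role is played by the LLM learning "variable patterns" from selections; the sketch only shows the shape of the loop.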


Treat yourself to these Prime Day deals on Breville espresso machines and juicers

Popular Science

Breville's high-end kitchen appliances don't go on sale often, but these Prime Day deals are a great time to upgrade. We may earn revenue from the products available on this page and participate in affiliate programs. If you're going to buy a kitchen appliance, you want it to be something that will work well and last a long time. Breville's high-end smart ovens, espresso machines, and blenders absolutely fit that bill. Right now, Amazon has them for their cheapest prices of the year during the Prime Big Deal Days sale.


A Synthetic Data Pipeline for Supporting Manufacturing SMEs in Visual Assembly Control

Werheid, Jonas, He, Shengjie, Gannouni, Aymen, Abdelrazeq, Anas, Schmitt, Robert H.

arXiv.org Artificial Intelligence

Quality control of assembly processes is essential in manufacturing to ensure not only the quality of individual components but also their proper integration into the final product. To assist in this matter, automated assembly control using computer vision methods has been widely implemented. However, the costs associated with image acquisition, annotation, and training of computer vision algorithms pose challenges for integration, especially for small- and medium-sized enterprises (SMEs), which often lack the resources for extensive training, data collection, and manual image annotation. Synthetic data offers the potential to reduce manual data collection and labeling. Nevertheless, its practical application in the context of assembly quality remains limited. In this work, we present a novel approach for easily integrable and data-efficient visual assembly control. Our approach leverages simulated scene generation based on computer-aided design (CAD) data and object detection algorithms. The results demonstrate a time-saving pipeline for generating image data in manufacturing environments, achieving a mean Average Precision (mAP@0.5:0.95) of up to 99.5% for correctly identifying instances of synthetic planetary gear system components within our simulated training data, and up to 93% when transferred to real-world camera-captured testing data. This research highlights the effectiveness of synthetic data generation within an adaptable pipeline and underscores its potential to support SMEs in implementing resource-efficient visual assembly control solutions.
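The mAP figures quoted above rest on box IoU and prediction-to-ground-truth matching. As a minimal sketch of that underlying metric (illustrative code, not the authors' evaluation pipeline), with boxes as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_at(preds, gts, thr=0.5):
    """Greedy one-to-one matching: fraction of predictions hitting a GT box."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thr:
            unmatched.remove(best)
            tp += 1
    return tp / len(preds) if preds else 0.0

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]  # one hit, one miss
```

mAP@0.5:0.95 averages precision-recall behaviour over IoU thresholds from 0.5 to 0.95; the sketch shows a single threshold to keep the mechanics visible.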


MCPWorld: A Unified Benchmarking Testbed for API, GUI, and Hybrid Computer Use Agents

Yan, Yunhe, Wang, Shihe, Du, Jiajun, Yang, Yexuan, Shan, Yuxuan, Qiu, Qichen, Jia, Xianqing, Wang, Xinge, Yuan, Xin, Han, Xu, Qin, Mao, Chen, Yinxiao, Peng, Chen, Wang, Shangguang, Xu, Mengwei

arXiv.org Artificial Intelligence

(M)LLM-powered computer use agents (CUA) are emerging as a transformative technique to automate human-computer interaction. However, existing CUA benchmarks predominantly target GUI agents, whose evaluation methods are susceptible to UI changes and ignore function interactions exposed by application APIs, e.g., Model Context Protocol (MCP). To this end, we propose MCPWorld, the first automatic CUA testbed for API, GUI, and API-GUI hybrid agents. A key principle of MCPWorld is the use of "white-box apps", i.e., apps whose source code is available and which can be revised and re-compiled as needed (e.g., adding MCP support), with two notable advantages: (1) It greatly broadens the design space of CUA, such as which app features are exposed or extracted as CUA-callable APIs, and how. (2) It allows MCPWorld to programmatically verify task completion by directly monitoring application behavior through techniques like dynamic code instrumentation, offering robust, accurate CUA evaluation decoupled from specific agent implementations or UI states. Currently, MCPWorld includes 201 well-curated and annotated user tasks, covering diversified use cases and difficulty levels. MCPWorld is also fully containerized with GPU acceleration support for flexible adoption on different OS/hardware environments. Our preliminary experiments, using a representative LLM-powered CUA framework, achieve 75.12% task completion accuracy, simultaneously providing initial evidence on the practical effectiveness of agent automation leveraging MCP. Overall, we anticipate MCPWorld to facilitate and standardize the benchmarking of next-generation computer use agents that can leverage rich external tools. Our code and dataset are publicly available at https://github.com/SAAgent/MCPWorld.
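The white-box verification idea (advantage 2) can be illustrated in miniature: instrument an app function so a verifier confirms task completion from observed behaviour rather than from UI state. All names below are hypothetical stand-ins, not MCPWorld's actual instrumentation:

```python
import functools

call_log = []

def instrumented(fn):
    """Record every call's name and arguments, as code instrumentation would."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_log.append((fn.__name__, args, kwargs))
        return fn(*args, **kwargs)
    return wrapper

@instrumented
def save_note(path, text):
    """Stand-in for an app feature a CUA might trigger via GUI or MCP."""
    return len(text)

def task_completed(log, expected_path):
    """Verifier: did the agent save a non-empty note to the expected path?"""
    return any(name == "save_note" and args[0] == expected_path and args[1]
               for name, args, _ in log)

save_note("todo.txt", "buy milk")  # the agent's action, however it was issued
```

Because the check reads the behaviour log, it gives the same verdict whether the agent reached `save_note` through a GUI click path or an MCP tool call.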


Learning Underwater Active Perception in Simulation

Cardaillac, Alexandre, Dansereau, Donald G.

arXiv.org Artificial Intelligence

When employing underwater vehicles for the autonomous inspection of assets, it is crucial to consider and assess the water conditions. Indeed, they have a significant impact on the visibility, which also affects robotic operations. Turbidity can jeopardise the whole mission, as it may prevent correct visual documentation of the inspected structures. Previous works have introduced methods to adapt to turbidity and backscattering; however, they also involve manoeuvring and setup constraints. We propose a simple yet efficient approach to enable high-quality image acquisition of assets in a broad range of water conditions. This active perception framework includes a multi-layer perceptron (MLP) trained to predict image quality given a distance to a target and artificial light intensity. We generated a large synthetic dataset including ten water types with different levels of turbidity and backscattering. For this, we modified the modelling software Blender to better account for the underwater light propagation properties. We validated the approach in simulation and showed significant improvements in visual coverage and quality of imagery compared to traditional approaches. The project code is available on our project page at https://roboticimaging.org/Projects/ActiveUW/.
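The active-perception step described above reduces to evaluating a small quality-prediction network over candidate settings and picking the best one. A toy sketch of that shape, with made-up weights (not the trained model from the paper):

```python
import math

def mlp_quality(distance, light):
    """Tiny MLP: (distance, light intensity) -> quality score in (0, 1).
    One tanh hidden layer, sigmoid output; weights are illustrative only."""
    hidden = [math.tanh(w_d * distance + w_l * light + b)
              for w_d, w_l, b in [(-0.8, 0.5, 1.0), (0.3, -0.6, 0.2)]]
    z = 1.2 * hidden[0] - 0.7 * hidden[1] + 0.1
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def best_setting(distances, lights):
    """Active-perception step: choose the (distance, light) maximising
    predicted image quality over a candidate grid."""
    return max(((d, l) for d in distances for l in lights),
               key=lambda dl: mlp_quality(*dl))

d, l = best_setting([0.5, 1.0, 2.0], [0.2, 0.6, 1.0])
```

With these toy weights the network favours short standoff distances and stronger lighting, which mirrors the intuition that turbid water penalises range more than it penalises illumination.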