Large Language Models as Commonsense Knowledge for Large-Scale Task Planning

Anonymous Author(s)

Appendix

A Experimental environments

We use the VirtualHome simulator [51].

Neural Information Processing Systems

A.1 List of objects, containers, surfaces, and rooms in the apartment

We list all the objects included in our experimental environment. We use object rearrangement tasks for evaluation; the tasks are randomly sampled from different distributions. Simple: move one object in the house to the desired location. Novel Simple: move one object in the house to the desired location.
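A task from a distribution like the ones above can be sampled with a few lines of code. This is a minimal sketch of the "Simple" distribution (pick one object and one target location uniformly at random); the object and location names are illustrative placeholders, not the simulator's actual inventory.

```python
import random

# Illustrative placeholders -- not the simulator's real object/location lists.
OBJECTS = ["apple", "book", "plate"]
LOCATIONS = ["fridge", "kitchen_table", "bookshelf"]

def sample_simple_task(rng: random.Random) -> dict:
    """Sample one (object -> target location) goal from the Simple distribution."""
    return {"object": rng.choice(OBJECTS), "target": rng.choice(LOCATIONS)}

task = sample_simple_task(random.Random(0))
print(task)
```

Seeding the generator, as above, makes the sampled task set reproducible across evaluation runs.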






Further Details

Neural Information Processing Systems

ReplicaCAD is one house with 111 layouts (6 from real scans, 105 artist-created). The TDW Transporter Challenge [7] has 15 houses, ManipulaTHOR [33] has 30 rooms, RoomR from Visual Room Rearrangement [58] has 120 rooms, iGibson [36] has 15 houses, and VirtualHome [51] has 6 houses.


Samsung Bespoke Fridge with AI review: All the bells and whistles

Engadget

While Samsung's AI Vision and food tracking are a work in progress, they can still be genuinely useful. At their core, refrigerators are relatively simple devices. If you're the type of person to view every extra feature as a component that could potentially go wrong, basic iceboxes are probably the kind you go for. But for those on the other end of the spectrum, Samsung's latest Bespoke Refrigerators with AI inside have more bells and whistles than you might think possible -- including an optional 32-inch screen. The model we tested for this review came out in the second half of 2025 and will remain on sale throughout 2026. The hardware will stay the same; the only changes will come via an OTA software update slated for later this year that will add support for Google Gemini, improved food recognition and labeling, and more.


Robots that can do laundry and more, plus unrolling laptops: the standout tech from CES 2026

The Guardian

A Sharpa North robot uses a camera at CES 2026 in Las Vegas. Last modified on Fri 9 Jan 2026 10.00 EST. This year will be filled with robots that can fold your laundry, pick up objects and climb stairs, fridges you can command to open by voice, laptops with screens that follow you around the room on motorised hinges, and the reimagining of the BlackBerry phone. Those are the predictions from the annual CES tech show in Las Vegas, which took place this week. The sprawling event aims to showcase cutting-edge technology developed by startups and big brands. Many of these fancy developments will actually be available to buy, moving from outlandish concepts to production devices, although some are still limited to costly prototypes.


Enhancing Cognitive Robotics with Commonsense through LLM-Generated Preconditions and Subgoals

Bachner, Ohad, Gamliel, Bar

arXiv.org Artificial Intelligence

Autonomous robots are increasingly deployed in dynamic and unstructured environments, where they must plan and execute complex tasks under uncertainty. Classical planning approaches, typically modeled in PDDL and solved with heuristic search, provide a principled foundation for task planning (Edelkamp and Schrödl, 2011; Geffner and Bonet, 2013). However, these methods rely on explicit domain models that enumerate preconditions and effects of actions. In practice, such models often omit implicit commonsense knowledge, for example, that a container must be upright before pouring, or that water must be boiled before making tea. The absence of such knowledge can lead to plans that are logically correct but physically invalid. Cognitive robotics research seeks to bridge symbolic reasoning with robot perception and control (Ghallab et al., 2004). While significant progress has been made in integrating planning with motion control and execution, robots still lack the ability to autonomously infer commonsense constraints that humans consider obvious. Large Language Models (LLMs), trained on massive corpora of human knowledge, present a promising avenue for addressing this gap. LLMs can generate likely preconditions, subgoals, and contextual constraints from natural language task descriptions, potentially enriching classical planning models.
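The enrichment idea described above can be sketched in a few lines: take a hand-written action model and union in preconditions proposed by an LLM. This is a minimal sketch, not the paper's implementation; `query_llm` is a stand-in stub (a real system would call an actual language model and parse its reply), and the action and fluent names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A symbolic planning action with string-valued fluents."""
    name: str
    preconditions: set = field(default_factory=set)
    effects: set = field(default_factory=set)

def query_llm(action_name: str) -> set:
    """Stub standing in for an LLM call that proposes commonsense
    preconditions from the action's natural-language name."""
    canned = {
        "pour(kettle, cup)": {"upright(cup)", "holding(kettle)"},
        "make_tea(cup)": {"boiled(water)", "in(teabag, cup)"},
    }
    return canned.get(action_name, set())

def enrich(action: Action) -> Action:
    """Union LLM-proposed preconditions into the hand-written model."""
    action.preconditions |= query_llm(action.name)
    return action

# The hand-written model knows the kettle needs water, but misses the
# commonsense constraints; enrichment fills those in.
pour = Action("pour(kettle, cup)", preconditions={"has_water(kettle)"})
enrich(pour)
print(sorted(pour.preconditions))
```

Taking the union, rather than replacing the model, keeps the engineered preconditions authoritative while letting the LLM only add constraints; a deployed system would still need to validate the proposed fluents against the domain's predicate vocabulary before planning.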