Tidiness Score-Guided Monte Carlo Tree Search for Visual Tabletop Rearrangement
Hogun Kee, Wooseok Oh, Minjae Kang, Hyemin Ahn, Songhwai Oh
-- In this paper, we present the tidiness score-guided Monte Carlo tree search (TSMCTS), a novel framework designed to address the tabletop tidying up problem using only an RGB-D camera. We address two major challenges in tabletop tidying: (1) the lack of public datasets and benchmarks, and (2) the difficulty of specifying the goal configuration for unseen objects. We address the former by presenting the tabletop tidying up (TTU) dataset, a structured dataset collected in simulation. Using this dataset, we train a vision-based discriminator capable of predicting a tidiness score. This discriminator consistently evaluates the degree of tidiness across unseen configurations, including real-world scenes. To address the second challenge, we employ Monte Carlo tree search (MCTS) to find tidying trajectories without specifying explicit goals. Instead of providing specific goals, we demonstrate that our MCTS-based planner can find diverse tidied configurations using the tidiness score as guidance. Consequently, we propose TSMCTS, which integrates the tidiness discriminator with an MCTS-based tidying planner to find optimal tidied arrangements. TSMCTS has demonstrated its capability across various environments, including coffee tables, dining tables, office desks, and bathrooms. In this paper, we address the tabletop tidying problem, where an embodied AI agent autonomously organizes objects on a table based on their composition. As depicted in Figure 1, tidying up involves rearranging objects by determining an appropriate configuration for the given objects, without an explicit target configuration being provided.
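The core idea of a score-guided MCTS planner can be sketched compactly. The following is a minimal toy illustration, not the paper's implementation: the learned vision-based discriminator is replaced by a hand-written `tidiness_score` over 1-D object positions, and the action space (`actions`, `step`) is a hypothetical discrete "move object to free cell" set. Only the overall pattern, rollouts evaluated by a tidiness score instead of a goal-reaching test, reflects the method described above.

```python
import math
import random

# Toy stand-in for the learned tidiness discriminator (assumption: the
# real TSMCTS scorer is a vision network over RGB-D images). Here
# "tidy" means objects sit adjacent and evenly spaced on a 1-D grid.
def tidiness_score(state):
    xs = sorted(state)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return 1.0 / (1.0 + sum(abs(g - 1) for g in gaps))  # 1.0 = perfectly tidy

def actions(state):
    # Hypothetical discrete action set: move any object to any free cell.
    return [(i, c) for i in range(len(state)) for c in range(10) if c not in state]

def step(state, action):
    i, c = action
    s = list(state)
    s[i] = c
    return tuple(s)

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct(node, c=1.4):
    # Upper-confidence selection among children.
    return max(node.children.values(),
               key=lambda n: n.value / (n.visits + 1e-9)
                             + c * math.sqrt(math.log(node.visits + 1) / (n.visits + 1e-9)))

def mcts(root_state, iters=300, depth=3):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(actions(node.state)):
            node = uct(node)
        # Expansion: try one untried action.
        untried = [a for a in actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(step(node.state, a), node)
            node = node.children[a]
        # Rollout: random moves; the tidiness score replaces a goal test.
        state = node.state
        for _ in range(depth):
            state = step(state, random.choice(actions(state)))
        reward = tidiness_score(state)
        # Backpropagation.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

random.seed(0)
messy = (0, 5, 9)  # object positions on a 1-D table
print(mcts(messy))  # first move of a tidying trajectory
```

Because the score, rather than a fixed goal state, drives the search, the planner is free to converge on any of several tidy arrangements, which matches the "diverse tidied configurations" behavior claimed above.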
"Tidy Up the Table": Grounding Common-sense Objective for Tabletop Object Rearrangement
Tidying up a messy table may appear simple for humans, but articulating clear criteria for tidiness is challenging due to the ambiguous nature of common sense reasoning. Large Language Models (LLMs) have proven capable of capturing common sense knowledge to reason over this vague concept of tidiness. However, they alone may struggle with table tidying due to their limited grasp of the spatio-visual aspects of tidiness. In this work, we aim to ground the common-sense concept of tidiness within the context of object arrangement. Our survey reveals that humans usually factorize tidiness into semantic and visual-spatial tidiness; our grounding approach aligns with this decomposition. We connect a language-based policy generator with an image-based tidiness score function: the policy generator utilizes the LLM's commonsense knowledge to cluster objects by their implicit types and functionalities for semantic tidiness; meanwhile, the tidiness score function assesses the visual-spatial relations of the objects to achieve visual-spatial tidiness. Our tidiness score is trained using synthetic data generated cheaply from customized random walks, which inherently encode the order of tidiness, thereby bypassing the need for labor-intensive human demonstrations. The simulated experiments show that our approach successfully generates tidy arrangements, predominantly in 2D, with potential for 3D stacking, for tables with various novel objects.
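The random-walk training scheme can be illustrated with a small sketch. Everything here is an assumption made for illustration: "tidy" is taken to mean objects evenly spaced on a line, the score is a one-parameter linear function of a hand-crafted spread feature, and all function names (`random_walk`, `spread_feature`, `train_ranking`) are ours, not the paper's. The one idea carried over from the abstract is that a walk away from a tidy state yields states whose step index encodes their relative tidiness, so ordered pairs for a ranking loss come for free.

```python
import random

def tidy_arrangement(n=4):
    # Assumed notion of "tidy": objects evenly spaced on a line.
    return [float(i) for i in range(n)]

def random_walk(arrangement, steps=5, sigma=0.3):
    """Perturb one object per step; step index encodes tidiness order."""
    state = list(arrangement)
    states = [list(state)]
    for _ in range(steps):
        i = random.randrange(len(state))
        state[i] += random.gauss(0, sigma)
        states.append(list(state))
    return states

def spread_feature(state):
    # Variance of adjacent gaps: 0 for evenly spaced objects.
    xs = sorted(state)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps)

def train_ranking(num_walks=200, lr=0.05, margin=0.1):
    """Fit score(s) = -w * spread(s) with a margin ranking loss so that
    earlier (tidier) walk states score higher than later (messier) ones."""
    w = 0.0
    for _ in range(num_walks):
        states = random_walk(tidy_arrangement())
        i = random.randrange(len(states) - 1)          # earlier = tidier
        j = random.randrange(i + 1, len(states))       # later = messier
        s_tidy = -w * spread_feature(states[i])
        s_messy = -w * spread_feature(states[j])
        if s_tidy < s_messy + margin:                  # margin violated
            w += lr * (spread_feature(states[j]) - spread_feature(states[i]))
    return w

random.seed(0)
w = train_ranking()
score = lambda s: -w * spread_feature(s)
print(score([0.0, 1.0, 2.0, 3.0]) > score([0.0, 0.1, 2.7, 3.0]))
```

The design choice the abstract highlights is visible here: no human ever labels a state as "tidy" or "messy"; the ordering within each walk supplies the supervision, which is why the data is cheap to generate.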