Rubik's Cube
College students demolish world record for fastest Rubik's cube robot
Mitsubishi's bragging rights for designing the world's fastest Rubik's cube-solving robot have officially been stolen by a team of undergrads in Indiana. Earlier this month, Purdue University announced that four collaborators in its Elmore Family School of Electrical and Computer Engineering (ECE) successfully designed and built a bot that not only set a new Guinness World Record, it absolutely demolished the multinational company's previous time. Meet Purdubik's Cube: a machine capable of completing a randomly shuffled Rubik's cube in just 0.103 seconds. Faster than the blink of a human eye, the feat is difficult to see, much less comprehend.
CubeRobot: Grounding Language in Rubik's Cube Manipulation via Vision-Language Model
Wang, Feiyang, Yu, Xiaomin, Wu, Wangyu
Solving a Rubik's Cube at a high level represents a notable milestone in human-level spatial imagination and logical reasoning. Traditional Rubik's Cube robots, which rely on complex vision systems and fixed algorithms, often struggle to adapt to complex and dynamic scenarios. To overcome this limitation, we introduce CubeRobot, a novel vision-language model (VLM) tailored for solving 3x3 Rubik's Cubes, empowering embodied agents with multimodal understanding and execution capabilities. We use the CubeCoT image dataset, which contains tasks at multiple levels of difficulty (43 subtasks in total), including tasks that humans are unable to handle, encompassing various cube states. We incorporate a dual-loop VisionCoT architecture and a Memory Stream, a paradigm for extracting task-related features from VLM-generated planning queries, enabling CubeRobot to plan, decide, and reflect independently and to manage high- and low-level Rubik's Cube tasks separately. In low-level Rubik's Cube restoration tasks, CubeRobot achieved an accuracy of 100%, matched that rate in medium-level tasks, and reached an accuracy of 80% in high-level tasks.
"Rubik's Cube: High-Order Channel Interactions with a Hierarchical Receptive Field" Supplementary Material
Section 2 provides the implementation details of Rubik's cube convolution within the image restoration task. Section 3 provides the evaluation of our proposed Rubik's cube convolution on the classification task. Section 4 provides more quantitative and qualitative results. Specifically, the input feature is separated into five groups, where the last four are shifted in four directions and the first is unchanged. When an up-shifting group interacts with the next down-shifting group, the (i, j) pixel interweaves with its neighboring pixel (i + p, j), where p denotes the number of shifted pixels. Therefore, through the combined action of the shifting and interaction operations, the receptive field is expanded along the downward direction.
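The shift step described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the equal five-way channel split, the shift amount p, and zero-padding at the vacated border are all assumptions made here for clarity.

```python
import numpy as np

def rubiks_shift(x, p=1):
    """Sketch of the described shift step: split the channels of x (C, H, W)
    into five groups, leave the first unchanged, and shift the other four by
    p pixels up, down, left, and right (assumed zero-padding at the border)."""
    groups = np.array_split(x, 5, axis=0)
    out = [groups[0]]                              # group 0: unchanged
    shifts = [(-p, 1), (p, 1), (-p, 2), (p, 2)]    # up, down, left, right
    for g, (s, ax) in zip(groups[1:], shifts):
        shifted = np.roll(g, s, axis=ax)
        # zero out the wrapped-around pixels so the shift pads with zeros
        idx = [slice(None)] * 3
        idx[ax] = slice(s, None) if s < 0 else slice(0, s)
        shifted[tuple(idx)] = 0
        out.append(shifted)
    return np.concatenate(out, axis=0)
```

After this shift, the (i, j) pixel of an up-shifted group sits next to the (i + p, j) pixel of a down-shifted group, so any subsequent channel interaction mixes spatially offset neighbors, which is how the receptive field grows.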
A Machine Learning Approach That Beats Large Rubik's Cubes
Chervov, Alexander, Khoruzhii, Kirill, Bukhal, Nikita, Naghiyev, Jalal, Zamkovoy, Vladislav, Koltsov, Ivan, Cheldieva, Lyudmila, Sychev, Arsenii, Lenin, Arsenii, Obozov, Mark, Urvanov, Egor, Romanov, Alexey
The paper proposes a novel machine learning-based approach to the pathfinding problem on extremely large graphs. This method leverages diffusion distance estimation via a neural network and uses beam search for pathfinding. We demonstrate its efficiency by finding solutions for 4x4x4 and 5x5x5 Rubik's cubes with unprecedentedly short solution lengths, outperforming all available solvers and introducing the first machine learning solver beyond the 3x3x3 case. In particular, it surpasses every single case of the combined best results in the Kaggle Santa 2023 challenge, which involved over 1,000 teams. For the 3x3x3 Rubik's cube, our approach achieves an optimality rate exceeding 98%, matching the performance of task-specific solvers and significantly outperforming prior solutions such as DeepCubeA (60.3%) and EfficientCube (69.6%). Additionally, our solution is more than 26 times faster in solving 3x3x3 Rubik's cubes while requiring up to 18.5 times less model training time than the most efficient state-of-the-art competitor.
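The pipeline the abstract describes, a learned distance estimate steering a beam search, can be sketched generically. The toy permutation puzzle and the misplaced-count heuristic below are stand-ins of my own for the Rubik's cube move set and the paper's neural distance estimator.

```python
def beam_search_solve(start, goal, moves, estimate, width=8, max_depth=20):
    """Beam search: at each depth, expand every state in the beam and keep
    only the `width` candidates with the smallest estimated distance-to-goal.
    `estimate` stands in for the paper's neural network."""
    beam = [(start, [])]
    for _ in range(max_depth):
        candidates = []
        for state, path in beam:
            for name, move in moves.items():
                nxt = move(state)
                if nxt == goal:
                    return path + [name]
                candidates.append((nxt, path + [name]))
        candidates.sort(key=lambda sp: estimate(sp[0], goal))
        beam = candidates[:width]
    return None

# Toy puzzle (hypothetical): a tuple permutation with adjacent swaps as moves;
# the misplaced-count heuristic replaces the learned diffusion-distance model.
def swap(i):
    return lambda s: s[:i] + (s[i + 1], s[i]) + s[i + 2:]

moves = {f"swap{i}": swap(i) for i in range(3)}
estimate = lambda s, g: sum(a != b for a, b in zip(s, g))
```

The width parameter is the usual quality/cost dial for this family of solvers: a wider beam keeps more near-optimal candidates alive at each depth, at proportionally higher cost per move.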
Wormhole Memory: A Rubik's Cube for Cross-Dialogue Retrieval
To address the gap in current large language models' ability to share memory across dialogues, this research proposes a wormhole memory module (WMM) that treats memory as a Rubik's cube which can be arbitrarily retrieved across different dialogues. Through simulation experiments, the researcher built an experimental framework in a Python environment and used memory barriers to simulate the current situation in which memories are difficult to share between LLM dialogues. The CoQA development set was imported into the experiment, the feasibility of WMM's cross-dialogue memory retrieval, based on nonlinear indexing and dynamic retrieval, was verified, and a comparative analysis was conducted against the Titans and MemGPT memory modules. Experimental results show that WMM demonstrated the ability to retrieve memory across dialogues, with stable quantitative indicators across eight experiments. This work contributes new technical approaches to optimizing memory management in LLMs and provides experience for future practical applications.
Review for NeurIPS paper: Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
This paper still considers only the resolution, depth, and width dimensions, which have already been studied in EfficientNet. Although the paper's finding that "resolution and depth are more important than width for tiny networks" differs from the conclusion in EfficientNet, I feel this point is not significant enough; it reads as a supplement to EfficientNet. I am not saying this kind of method is bad, but the insights and intuitions for why resolution and depth are more important than width for small networks (derived this way) are still not clear. In my opinion, this paper is essentially doing random search by shrinking the EfficientNet-B0 structure configuration along the three dimensions mentioned; I believe the derived observation is useful, but the method itself offers very limited value to the community. Even a simple search method such as evolutionary search could achieve a similar or the same purpose more efficiently.
Review for NeurIPS paper: Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
The paper received mixed ratings: two reviewers recommend acceptance, and two consider the paper marginally below the threshold. All reviewers agree that the paper provides useful insights, e.g., the observation that resolution and depth are more important than width for tiny networks. The main concerns raised were (i) the novelty is not highly significant and the method is too heuristic, and (ii) issues with the experiments and a lack of analysis on other tasks, such as object detection. The rebuttal helped clarify several other questions raised by the reviewers and included new experiments on COCO object detection using Faster-RCNN. All reviewers actively participated in the discussion phase.
Node Classification and Search on the Rubik's Cube Graph with GNNs
This study focuses on the application of deep geometric models to solve the 3x3x3 Rubik's Cube. We begin by discussing the cube's graph representation and defining distance as the model's optimization objective. The distance approximation task is reformulated as a node classification problem, effectively addressed using Graph Neural Networks (GNNs). After training the model on a random subgraph, the predicted classes are used to construct a heuristic for $A^*$ search. We conclude with experiments comparing our heuristic to that of the DeepCubeA model.
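The search stage described above can be sketched generically: A* guided by a heuristic h that, in the paper, would come from the GNN's predicted distance class. The line-graph toy problem and exact-distance heuristic below are illustrative assumptions, not the paper's setup.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search where h(state) plays the role of a learned distance
    estimate (in the paper, the GNN's predicted node class)."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            ng = g + 1  # unit move cost, as on the cube's Cayley graph
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

Note that a classifier's predicted distance bucket is not guaranteed admissible, so in this setting A* trades optimality guarantees for the quality of the learned heuristic, the same trade-off DeepCubeA makes.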
Solving Rubik's Cube Without Tricky Sampling
The Rubik's Cube, with its vast state space and sparse reward structure, presents a significant challenge for reinforcement learning (RL) due to the difficulty of reaching rewarded states. Previous research addressed this by propagating cost-to-go estimates from the solved state and incorporating search techniques. These approaches differ from human strategies, which start from fully scrambled cubes, and they can be tricky to apply to general sparse-reward problems. In this paper, we introduce a novel RL algorithm using policy gradient methods to solve the Rubik's Cube without relying on near-solved-state sampling. Our approach employs a neural network to predict cost patterns between states, allowing the agent to learn directly from scrambled states. Our method was tested on 50,000 scrambles of the 2x2x2 Rubik's Cube, and the model successfully solved over 99.4% of cases. Notably, this result was achieved using only the policy network, without relying on tree search as in previous methods, demonstrating its effectiveness and potential for broader applications in sparse-reward problems.
Quantum Rubik's cube has infinite patterns but is still solvable
How many moves does it take to solve a Rubik's cube if it is quantum? A quantum Rubik's cube would be infinitely more complex than the traditional puzzle, but mathematical modelling shows it wouldn't be unsolvable. In the summer of 2022, Noah Lordi and Maedée Trank-Greene at the University of Colorado Boulder and their colleagues made a bet: how many possible states would a quantum Rubik's cube have? To even make their guesses, they first had to define what a quantum Rubik's cube would entail.