

Celebrity appearances, controversial ads and other Super Bowl takeaways

BBC News

Latin megastar Bad Bunny performed a medley of his top hits at the Super Bowl on Sunday in a star-studded show that was criticised as terrible by the US president. The Puerto Rican singer, also known as Benito Antonio Martinez Ocasio, was joined on stage by a host of fellow music stars including Lady Gaga, Ricky Martin and Cardi B. Sitting in the stands, Kim Kardashian and Lewis Hamilton made their first major public appearance together, after weeks of speculation about their romance. The seven-time Formula 1 world champion and the reality TV star were spotted chatting and smiling together during the game, and were caught on video by NBC News. Fellow musical superstars Lady Gaga, Cardi B and Jessica Alba joined the dancers on stage alongside Bad Bunny, who was the world's most-played artist in 2025 on Spotify, according to the streaming service. Chilean-American actor Pedro Pascal and Puerto Rican singer Ricky Martin also joined the performance, which was populated by a largely pan-American crowd of celebrities.


Length-MAX Tokenizer for Language Models

Dong, Dong, Su, Weijie

arXiv.org Artificial Intelligence

We introduce a new tokenizer for language models that minimizes the average tokens per character, thereby reducing the number of tokens needed to represent text during training and to generate text during inference. Our method, which we refer to as the Length-MAX tokenizer, obtains its vocabulary by casting a length-weighted objective maximization as a graph partitioning problem and developing a greedy approximation algorithm. On FineWeb and diverse domains, it yields 14-18% fewer tokens than Byte Pair Encoding (BPE) across vocabulary sizes from 10K to 50K, and the reduction is 13.0% when the size is 64K. Training GPT-2 models at 124M, 355M, and 1.3B parameters from scratch with five runs each shows 18.5%, 17.2%, and 18.5% fewer steps, respectively, to reach a fixed validation loss, and 13.7%, 12.7%, and 13.7% lower inference latency, together with a 16% throughput gain at 124M, while consistently improving on downstream tasks, including reducing LAMBADA perplexity by 11.7% and improving HellaSwag accuracy by 4.3%. Moreover, the Length-MAX tokenizer achieves 99.62% vocabulary coverage, and the out-of-vocabulary rate remains low at 0.12% on test sets. These results demonstrate that optimizing for average token length, rather than frequency alone, offers an effective approach to more efficient language modeling without sacrificing -- and often improving -- downstream performance. The tokenizer is compatible with production systems and reduces embedding and KV-cache memory by 18% at inference.
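The abstract describes the objective (minimize average tokens per character) but not the algorithm in detail. The sketch below is an illustrative caricature, not the authors' method: the function names, the frequency-times-(length − 1) score, and the greedy longest-match segmentation are all our assumptions about what a length-weighted tokenizer could look like.

```python
from collections import Counter

def build_vocab(corpus, max_len=5, vocab_size=50):
    # Count every substring of up to max_len characters.
    counts = Counter()
    for text in corpus:
        for i in range(len(text)):
            for j in range(i + 1, min(i + max_len, len(text)) + 1):
                counts[text[i:j]] += 1
    # Length-weighted score: frequency * (length - 1) favors tokens that
    # are both long and common, unlike pure frequency merging.
    scored = sorted(counts, key=lambda t: counts[t] * (len(t) - 1), reverse=True)
    vocab = set(scored[:vocab_size])
    vocab.update("".join(corpus))  # single characters guarantee full coverage
    return vocab

def tokenize(text, vocab, max_len=5):
    # Greedy longest-match segmentation: always take the longest token
    # in the vocabulary that starts at the current position.
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(i + max_len, len(text)), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
    return tokens
```

Because every single character is in the vocabulary, segmentation always terminates and reconstructs the input exactly; the length-weighted score only changes how few tokens that takes.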


Kids as young as 4 innately use sorting algorithms to solve problems

New Scientist

It was previously thought that children younger than 7 couldn't find efficient solutions to complex problems, but new research suggests that much earlier, children can happen upon known sorting algorithms used by computer scientists. Children as young as 4 years old are capable of finding efficient solutions to complex problems, such as independently inventing sorting algorithms developed by computer scientists. The scientists behind the finding say these skills emerge far earlier than previously thought, and should force a rethink of developmental psychology. Experiments carried out by the Swiss psychologist Jean Piaget, and widely popularised in the 1960s, asked children to physically sort a collection of sticks into length order, a task Piaget called seriation. His tests suggested that until around age 7, children applied no structured strategies; they approached the problem in messy ways, through trial and error. But new research by Huiwen Alex Yang and his colleagues at the University of California, Berkeley, shows that a minority of even 4-year-old children can develop algorithmic solutions to the same task, and that by age 5 more than a quarter can.
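The article does not say which sorting algorithms the children rediscovered. As a hedged illustration only, here is one classic comparison sort, selection sort, phrased as a seriation strategy; the "pick the shortest remaining stick" framing is our gloss, not the study's description.

```python
def seriate(sticks):
    """Selection sort: repeatedly find the shortest remaining stick
    and move it to the front of the unsorted region, mirroring a
    strategy of picking the shortest unsorted stick each round."""
    sticks = list(sticks)
    for i in range(len(sticks)):
        shortest = min(range(i, len(sticks)), key=lambda j: sticks[j])
        sticks[i], sticks[shortest] = sticks[shortest], sticks[i]
    return sticks
```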


DAN GAINOR: Demon rabbits, Taylor and Travis, hot dog havoc: August's 7 wildest stories

FOX News

Singer-songwriter Taylor Swift and NFL tight end Travis Kelce announced their engagement on Instagram after two years of dating. I bet you thought bunnies were nice, normal, cuddly critters -- except for the vorpal bunny of "Monty Python" fame. Turns out, we were all wrong. According to The Associated Press, there's a group of rabbits in Colorado with grotesque horn-like growths that may seem straight out of a low-budget horror film. Hide your kids, hide your wives and dig out your VHS copy of "Night of the Lepus."


Helen Phillips's "Hum," Reviewed

The New Yorker

"Hum," Helen Phillips's third novel, begins with a needle being drawn, steadily and irreversibly, across a woman named May's face. She is participating in a paid experiment in "adversarial tech," undergoing a procedure that will ever so slightly alter her features, making her harder for surveillance cameras to identify. As the book opens, May is mid-op, the needle advancing its "slender and relentless line of penetration" across her temple, toward the skin of her eyelid. What lies on the other side of the surgery? "Some sort of transformation, undeniable but undetectable," Phillips writes.


'Robot whisperer' trains AI-powered 'dogs' to PAINT abstract paintings that resemble human pieces selling for more than $2,000 at auction... Can YOU spot the machines' designs?

Daily Mail - Science & tech

Although Pilat works with the robots daily, she told The Guardian that she still doesn't fully understand them, so she works with Boston Dynamics engineers to shape the robot dogs' personalities. Together, Pilat and the engineers use AI, software, and machine learning to train the robots and even use Basia as a surrogate pet, frequently taking it for walks around New York. Basia is the 'serious one', Pilat told The Guardian, adding that the robot dog will paint about one canvas every three days, while Vanya is the 'mother of the group' and paces around the studio. Meanwhile, Bunny's vanity often wins out, and it will continuously pose in front of a wall designed for selfies.


Robot dogs have unnerved and angered the public. So why is this artist teaching them to paint?

The Guardian

The artist is completely focused, a black oil crayon in her hand as she repeatedly draws a small circle on a vibrant teal canvas. She is unbothered by the three people closely observing her every movement, and doesn't seem to register my entrance into this bright white room inside the National Gallery of Victoria. The artist is a robot; more specifically, Basia is a 30kg "Spot" robot dog designed by Boston Dynamics. You've likely seen videos of these dogs opening doors, climbing stairs and decorating Christmas trees, while performing eerily fluid actions that cause people to write comments like, "Can't wait to have a pack of these chase me through a post-apocalyptic urban hellscape!" The robots are designed to perform tasks that are dangerous for humans: they tend to be bought by mining and construction corporations, as well as police and the military.


Magic3D: High-Resolution Text-to-3D Content Creation

Lin, Chen-Hsuan, Gao, Jun, Tang, Luming, Takikawa, Towaki, Zeng, Xiaohui, Huang, Xun, Kreis, Karsten, Fidler, Sanja, Liu, Ming-Yu, Lin, Tsung-Yi

arXiv.org Artificial Intelligence

DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results. However, the method has two inherent limitations: (a) extremely slow optimization of NeRF and (b) low-resolution image-space supervision on NeRF, leading to low-quality 3D models with a long processing time. In this paper, we address these limitations by utilizing a two-stage optimization framework. First, we obtain a coarse model using a low-resolution diffusion prior, accelerated with a sparse 3D hash grid structure. Using the coarse representation as the initialization, we further optimize a textured 3D mesh model with an efficient differentiable renderer interacting with a high-resolution latent diffusion model. Our method, dubbed Magic3D, can create high-quality 3D mesh models in 40 minutes, which is 2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also achieving higher resolution. User studies show that 61.7% of raters prefer our approach over DreamFusion. Together with image-conditioned generation capabilities, we provide users with new ways to control 3D synthesis, opening up new avenues for various creative applications.


Differentiable Physics Simulation of Dynamics-Augmented Neural Objects

Cleac'h, Simon Le, Yu, Hong-Xing, Guo, Michelle, Howell, Taylor A., Gao, Ruohan, Wu, Jiajun, Manchester, Zachary, Schwager, Mac

arXiv.org Artificial Intelligence

We present a differentiable pipeline for simulating the motion of objects that represent their geometry as a continuous density field parameterized as a deep network. This includes Neural Radiance Fields (NeRFs), and other related models. From the density field, we estimate the dynamical properties of the object, including its mass, center of mass, and inertia matrix. We then introduce a differentiable contact model based on the density field for computing normal and friction forces resulting from collisions. This allows a robot to autonomously build object models that are visually and dynamically accurate from still images and videos of objects in motion. The resulting Dynamics-Augmented Neural Objects (DANOs) are simulated with an existing differentiable simulation engine, Dojo, interacting with other standard simulation objects, such as spheres, planes, and robots specified as URDFs. A robot can use this simulation to optimize grasps and manipulation trajectories of neural objects, or to improve the neural object models through gradient-based real-to-simulation transfer. We demonstrate the pipeline to learn the coefficient of friction of a bar of soap from a real video of the soap sliding on a table. We also learn the coefficient of friction and mass of a Stanford bunny through interactions with a Panda robot arm from synthetic data, and we optimize trajectories in simulation for the Panda arm to push the bunny to a goal location.
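The first step the abstract describes, estimating mass, center of mass, and inertia from a density field, is standard rigid-body integration and can be sketched directly. Everything below is an illustrative approximation: a hand-written uniform-ball density stands in for a neural density field, and the grid resolution and function names are our assumptions, not the paper's implementation.

```python
def rigid_body_props(density, lo, hi, n=20):
    """Estimate mass, center of mass, and inertia matrix of an object
    whose geometry is given only as a density field rho(x, y, z),
    by summing over a regular n x n x n grid on the cube [lo, hi]^3
    (a stand-in for sampling a neural density field)."""
    h = (hi - lo) / n
    dv = h ** 3  # volume of one grid cell
    pts = [lo + (k + 0.5) * h for k in range(n)]  # cell centers
    mass, com, samples = 0.0, [0.0, 0.0, 0.0], []
    for x in pts:
        for y in pts:
            for z in pts:
                m = density(x, y, z) * dv
                if m == 0.0:
                    continue
                mass += m
                com[0] += m * x; com[1] += m * y; com[2] += m * z
                samples.append((m, x, y, z))
    com = [c / mass for c in com]
    # Inertia matrix about the center of mass: I = sum m (|r|^2 Id - r r^T).
    I = [[0.0] * 3 for _ in range(3)]
    for m, x, y, z in samples:
        r = (x - com[0], y - com[1], z - com[2])
        r2 = r[0] ** 2 + r[1] ** 2 + r[2] ** 2
        for a in range(3):
            for b in range(3):
                I[a][b] += m * ((r2 if a == b else 0.0) - r[a] * r[b])
    return mass, com, I

# Unit ball of density 1: mass should approach 4*pi/3, and the inertia
# diagonal should approach (2/5) * mass (solid sphere, radius 1).
ball = lambda x, y, z: 1.0 if x * x + y * y + z * z <= 1.0 else 0.0
```

In the paper this integration is differentiable end-to-end; the grid sum above is the non-differentiable toy version of the same quantities.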


Transformer with Implicit Edges for Particle-based Physics Simulation

Shao, Yidi, Loy, Chen Change, Dai, Bo

arXiv.org Artificial Intelligence

Particle-based systems provide a flexible and unified way to simulate physics systems with complex dynamics. Most existing data-driven simulators for particle-based systems adopt graph neural networks (GNNs) as their network backbones, as particles and their interactions can be naturally represented by graph nodes and graph edges. However, while particle-based systems usually contain hundreds or even thousands of particles, the explicit modeling of particle interactions as graph edges inevitably leads to significant computational overhead, due to the increased number of particle interactions. Consequently, in this paper we propose a novel Transformer-based method, dubbed Transformer with Implicit Edges (TIE), to capture the rich semantics of particle interactions in an edge-free manner. The core idea of TIE is to decentralize the computation involving pair-wise particle interactions into per-particle updates. This is achieved by adjusting the self-attention module to resemble the update formula of graph edges in GNNs. To improve the generalization ability of TIE, we further augment TIE with learnable material-specific abstract particles to disentangle global material-wise semantics from local particle-wise semantics. We evaluate our model on diverse domains of varying complexity and materials. Compared with existing GNN-based methods, without bells and whistles, TIE achieves superior performance and generalization across all these domains. Code and models are available at https://github.com/ftbabi/TIE_ECCV2022.git.
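As a rough caricature of the edge-free idea (not the paper's actual module, which modifies the attention formula to mimic GNN edge updates and adds learned projections), the per-particle update below folds all pairwise interactions into self-attention weights rather than materializing explicit edge features:

```python
import math

def attention_update(states):
    """Per-particle update via plain scaled dot-product self-attention:
    pairwise particle interactions enter only through attention weights,
    so no per-edge feature vectors are ever stored."""
    d = len(states[0])
    out = []
    for q in states:
        # Scaled dot-product score of this particle against every particle.
        scores = [sum(qa * ka for qa, ka in zip(q, k)) / math.sqrt(d)
                  for k in states]
        mx = max(scores)                       # for numerical stability
        w = [math.exp(s - mx) for s in scores]
        z = sum(w)
        # New state is the attention-weighted mix of all particle states.
        out.append([sum(wi * k[a] for wi, k in zip(w, states)) / z
                    for a in range(d)])
    return out
```

The point of the sketch is the memory pattern: a GNN with explicit edges would store a feature vector per interaction, whereas here each particle only ever holds a length-N weight vector that is discarded after its update.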