
AI-powered bat tracking could give baseball players the edge

FOX News



VER: Vision Expert Transformer for Robot Learning via Foundation Distillation and Dynamic Routing

Wang, Yixiao, Huo, Mingxiao, Liang, Zhixuan, Du, Yushi, Sun, Lingfeng, Lin, Haotian, Shang, Jinghuan, Peng, Chensheng, Bansal, Mohit, Ding, Mingyu, Tomizuka, Masayoshi

arXiv.org Artificial Intelligence

Pretrained vision foundation models (VFMs) advance robotic learning via rich visual representations, yet individual VFMs typically excel only in specific domains, limiting generality across tasks. Distilling multiple VFMs into a unified representation for policy learning can mitigate this limitation but often yields inflexible task-specific feature selection and requires costly full re-training to incorporate robot-domain knowledge. We propose VER, a Vision Expert transformer for Robot learning. During pretraining, VER distills multiple VFMs into a vision expert library. It then fine-tunes only a lightweight routing network (fewer than 0.4% of parameters) to dynamically select task-relevant experts from the pretrained library for downstream robot tasks. We further introduce Patchwise Expert Routing with Curriculum Top-K Annealing to improve both the flexibility and precision of dynamic expert selection. Moreover, VER supports parameter-efficient finetuning for scalable expert utilization and adaptive robot-domain knowledge integration. Across 17 diverse robotic tasks and multiple policy heads, VER achieves state-of-the-art performance. We find that VER reduces large-norm outliers in task-irrelevant regions (e.g., background) and concentrates on task-critical regions. Visualizations and code are available at https://yixiaowang7.github.io/ver_page/.
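The patchwise routing idea from the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the array shapes, the linear router, and the function names (`patchwise_topk_route`, `curriculum_k`) are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchwise_topk_route(patch_feats, expert_feats, router_w, k):
    """For each image patch, pick the top-k experts and mix their features.

    patch_feats:  (P, D)    per-patch input features
    expert_feats: (E, P, D) features produced by each of E frozen experts
    router_w:     (D, E)    linear routing network (the only trained part)
    k:            number of experts kept per patch
    """
    logits = patch_feats @ router_w              # (P, E) routing scores
    # keep only the k largest logits per patch; mask the rest out
    kth = np.sort(logits, axis=1)[:, -k][:, None]
    masked = np.where(logits >= kth, logits, -np.inf)
    weights = softmax(masked, axis=1)            # (P, E), zeros off top-k
    # weighted sum of expert features, computed per patch
    return np.einsum('pe,epd->pd', weights, expert_feats)

def curriculum_k(step, total_steps, k_start, k_end):
    """Linearly anneal top-k from a loose k_start to a tight k_end."""
    frac = min(step / max(total_steps, 1), 1.0)
    return int(round(k_start + frac * (k_end - k_start)))
```

Routing per patch, rather than per image, lets different regions of one observation draw on different experts; annealing k down over training starts with a broad mixture and gradually sharpens the selection.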


A Collision With Another Planet Could Have Allowed for Life on Earth

WIRED

Analysis by researchers at the University of Bern suggests that water and other volatile compounds arrived on Earth from outer space--specifically via a collision with a Mars-sized planet billions of years ago. Many scientists believe that in its infancy, Earth collided with another world the size of Mars, and that instead of being destroyed, it was transformed, incorporating the mass of that foreign body to become the planet we know. Recent research adds another layer of relevance to that hypothesized cosmic event: Scientists believe that without that other body, the basic conditions for life to emerge on Earth might never have appeared. A team from the University of Bern in Switzerland argues that, due to its proximity to the sun, the proto-Earth that existed before this potential collision lost the volatile elements essential to form complex molecules. Any hydrogen, carbon, or sulfur, their analysis suggests, evaporated in just the first 3 million years after proto-Earth's formation.


Handheld 'robotic guide dog' will help people with visual impairments

#artificialintelligence

A student has designed a handheld 'robotic guide dog' to help support people with visual impairments who are unable to house a real assistance animal. Loughborough University design engineer Anthony Camu was inspired to develop the device by responsive virtual reality gaming controllers. Dubbed 'Theia' -- after the Titan goddess of light in Greek mythology -- the prototype can replicate the key functions of a real guide dog. The voice-activated device can program quick and safe routes to given destinations using real-time online data -- much like a car's satnav -- and onboard sensors. Force feedback delivered through Theia's handle then helps direct the user -- creating a sensation the designers say is similar to the pull of a guide dog's leash.


Inventive snapper creates stunning moon photos by combining 50,000 photos

Daily Mail - Science & tech

A cameraman has taken backyard photography of the night sky to the next level by creating a strikingly clear image of the moon that combines 50,000 separate snaps. Andrew McCarthy, from Elk Grove, California, took the stunning 81-megapixel shot, which shows half the moon shrouded in darkness while the other half is illuminated. Rather than using high-tech equipment costing tens of thousands, Andrew simply used two regular cameras to capture different parts of the moon, then stitched them together using computer software. Because of this approach, the final image is a whopping 81 megapixels in size - around three times the resolution of cutting-edge 8K TV sets. Sections of the moon were captured in what Mr McCarthy calls 'tiles' - images stitched together using Photoshop - before the best and clearest shots were chosen, averaging out factors like blurring.
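The core of this technique - averaging many aligned exposures of the same scene so random blur and noise cancel while static detail survives - can be sketched as below. The synthetic 'scene', the noise model, and the `stack_frames` helper are illustrative assumptions, not Mr McCarthy's actual workflow, which also involves tiling and frame selection.

```python
import numpy as np

def stack_frames(frames):
    """Average a stack of aligned exposures of the same scene.

    Random noise differs from frame to frame, so the mean suppresses it
    by roughly sqrt(N), while the static lunar detail is preserved.
    """
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0)

# demo: a fixed synthetic 'scene' plus independent per-frame noise
rng = np.random.default_rng(1)
scene = rng.uniform(0, 255, size=(64, 64))
frames = [scene + rng.normal(0, 20, size=scene.shape) for _ in range(200)]
result = stack_frames(frames)
# residual noise in the stacked image is far below a single frame's
```

With 200 frames the residual error shrinks by about a factor of 14 (sqrt(200)) relative to one exposure, which is why stacking tens of thousands of frames can recover detail no single shot contains.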