

Running VLAs at Real-time Speed

Ma, Yunchao, Zhou, Yizhuang, Yang, Yunhuan, Wang, Tiancai, Fan, Haoqiang

arXiv.org Artificial Intelligence

In this paper, we show how to run a pi0-level multi-view VLA at a 30Hz frame rate and at up to a 480Hz trajectory frequency using a single consumer GPU. This enables dynamic, real-time tasks that were previously believed to be unattainable by large VLA models. To achieve this, we introduce a bag of strategies that eliminate the overheads in model inference. Real-world experiments show that the pi0 policy with our strategies achieves a 100% success rate on a falling-pen grasping task. Building on these results, we further propose a full streaming inference framework for real-time robot control with VLAs. Code is available at https://github.com/Dexmal/realtime-vla.
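The abstract does not detail the authors' strategies, but one common way a 30Hz policy can drive a 480Hz command stream is to interpolate between consecutive action waypoints emitted by the model. The sketch below is a hypothetical illustration of that general idea, not the paper's implementation; all names and values are our own.

```python
import numpy as np

# Rates from the abstract: the policy infers at 30 Hz, while commands
# are streamed to the robot at up to 480 Hz.
POLICY_HZ = 30
CONTROL_HZ = 480
SUBSTEPS = CONTROL_HZ // POLICY_HZ  # 16 interpolated commands per policy step

def interpolate_chunk(prev_target, next_target, substeps=SUBSTEPS):
    """Linearly interpolate between two consecutive policy waypoints so a
    30 Hz stream of targets yields a 480 Hz stream of robot commands."""
    alphas = np.arange(1, substeps + 1) / substeps
    return prev_target + alphas[:, None] * (next_target - prev_target)

# Toy example: 2-DoF joint targets from two consecutive 30 Hz inferences.
prev = np.array([0.0, 0.0])
nxt = np.array([1.6, -0.8])
commands = interpolate_chunk(prev, nxt)  # shape (16, 2), ends exactly at nxt
```

In a real controller, the interpolation loop would run in its own thread while the next inference executes, which is one way a "streaming" framework can hide model latency.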


Preserving Power Optimizations Across the High Level Synthesis of Distinct Application-Specific Circuits

Garcia, Paulo

arXiv.org Artificial Intelligence

We evaluate the use of software interpretation to push High Level Synthesis of application-specific accelerators toward a higher level of abstraction. Our methodology is supported by a formal power consumption model that computes the power consumption of accelerator components, accurately predicting the power consumption of new designs from prior optimization estimates. We demonstrate how our approach simplifies the reuse of power optimizations across distinct designs by leveraging the higher level of design abstraction, using two accelerators representative of the robotics domain, implemented with the Bambu High Level Synthesis tool. The results support the research hypothesis, with predictions accurate to within +/- 1%.
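The abstract does not give the form of the power model, but a minimal sketch of the underlying idea — predicting a new design's power from per-component coefficients calibrated on prior designs — might look like the following. All component names and numbers here are hypothetical, for illustration only.

```python
def predict_power(utilization, coefficients):
    """Estimate total accelerator power (mW) as a sum of per-component
    contributions: utilization fraction * calibrated power coefficient."""
    return sum(coefficients[name] * u for name, u in utilization.items())

# Coefficients (mW at full utilization) calibrated on an earlier design.
coeffs = {"multiplier": 12.0, "adder": 3.5, "bram": 8.0}

# A new design's per-component utilization (0..1), e.g. from HLS reports.
new_design = {"multiplier": 0.6, "adder": 0.9, "bram": 0.4}

estimate = predict_power(new_design, coeffs)  # 7.2 + 3.15 + 3.2 = 13.55 mW
```

The appeal of such a component-level model is that the coefficients are design-independent, so an optimization evaluated once can be costed on any new design without re-measurement — the property the paper's +/- 1% result speaks to.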


Sony shows off a robot grabber, 4K OLED panels for VR, and more

#artificialintelligence

Sony held its Technology Day event to show off what it's been working on in its R&D labs, and this year we got some great visuals. Amidst rehashes of the PS5's haptics and 3D audio and a demo reel of Sony's admittedly awesome displays for making virtual movie sets, we got to see a robot hand that Sony said could adjust its grip strength depending on what it was picking up, a slightly dystopian-sounding "global sensing system," and more. Perhaps the most interesting thing Sony showed off was a headset featuring OLED displays with "4K-per-inch" resolution. While the headset Sony used in its presentation was very clearly intended for lab and prototype use, the specs Sony laid out for the panels were reminiscent of the rumors swirling around the PlayStation VR 2. They don't exactly line up, though: Sony said the headset it showed off was 8K, given one 4K display per eye, while the PS VR 2 will supposedly be only 4K overall, with 2000 x 2040 pixels per eye. Still, it's exciting that Sony is working on VR-focused panels, along with latency-reduction tech for them.


Robotic grabber catches squidgy deep sea animals without harming them

New Scientist

The deep sea is a challenging place to study wildlife, but a new foldable robotic grabber may make capturing underwater creatures a bit easier. Many deep sea animals, such as jellyfish and their relatives, have fragile bodies. This means that catching them with suction or claw-like grabbers can cause them to break apart, leaving broken pieces to study instead of whole organisms. To counteract this, Zhi Ern Teoh at Harvard University in Massachusetts and colleagues created a robotic grabber based on a regular dodecahedron – a 3D shape built from 12 pentagons. The grabber is used by attaching it to a remotely operated underwater vehicle or another type of submersible. It starts as a flat base that then gently folds around the animal.


Magnetic 3-D-printed structures crawl, roll, jump, and play catch

Robohub

By Jennifer Chu MIT engineers have created soft, 3-D-printed structures whose movements can be controlled with a wave of a magnet, much like marionettes without the strings. The menagerie of structures that can be magnetically manipulated includes a smooth ring that wrinkles up, a long tube that squeezes shut, a sheet that folds itself, and a spider-like "grabber" that can crawl, roll, jump, and snap together fast enough to catch a passing ball. It can even be directed to wrap itself around a small pill and carry it across a table. The researchers fabricated each structure from a new type of 3-D-printable ink that they infused with tiny magnetic particles. They fitted an electromagnet around the nozzle of a 3-D printer, which caused the magnetic particles to swing into a single orientation as the ink was fed through the nozzle.


The Robot Dog That Can Open a Door Is Even More Impressive Than It Looks

Slate

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. The renowned robot maker Boston Dynamics released a new, and likely highly produced, video on Monday of its latest robot "dog," the SpotMini. From the looks of it, it's an incredible piece of machinery with remarkably lifelike movements, showing a level of dynamism and coordination between body and software that I've never seen before. It certainly left some people at least slightly worried that we're nearing a future in which robots will be able to let themselves out of the lab. In the video, a little robot dog prances over to a door, only to realize it has no hands and can't open it. A few seconds later, a larger Spot robot dog, with an articulated arm ending in a grabber for a hand where its head should be, emerges from around a corner.