This AI Model Can Intuit How the Physical World Works
As the engineers who build self-driving cars know, it can be hard to get an AI system to reliably make sense of what it sees. Most systems designed to "understand" videos, whether to classify their content ("a person playing tennis," for example) or to identify the contours of an object such as a car up ahead, work in what's called "pixel space." The model essentially treats every pixel in a video as equal in importance.

But these pixel-space models come with limitations. Imagine trying to make sense of a suburban street. If the scene has cars, traffic lights and trees, the model might focus too much on irrelevant details, such as the motion of the leaves, and miss the color of the traffic light or the positions of nearby cars. "When you go to images or video, you don't want to work in [pixel] space because there are too many details you don't want to model," said Randall Balestriero, a computer scientist at Brown University.

Yann LeCun, a computer scientist at New York University and the director of AI research at Meta, created JEPA, a predecessor to V-JEPA that works on still images, in 2022.
Dec-7-2025