Machines that see the world more like humans do
Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher: such systems have failed to detect emergency vehicles and pedestrians crossing the street, for example.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects. The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene.
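The cross-checking idea can be illustrated with a toy sketch, not the researchers' actual system: each candidate interpretation of a scene "renders" to predicted sensor readings, and the system scores candidates by how likely the observed camera data is under each one. The scene names, depth values, and noise model below are all illustrative assumptions.

```python
import math

def log_likelihood(observed, predicted, noise_sigma=0.1):
    """Gaussian log-likelihood of observed readings given a rendered candidate scene."""
    return sum(
        -0.5 * ((o - p) / noise_sigma) ** 2
        - math.log(noise_sigma * math.sqrt(2 * math.pi))
        for o, p in zip(observed, predicted)
    )

# Hypothetical renders: depth readings for two interpretations of the same
# dinner-table scene (a fork leaning against a bowl vs. passing through it).
candidate_scenes = {
    "fork_leaning_on_bowl":  [1.00, 0.95, 0.90, 0.88],
    "fork_penetrating_bowl": [1.00, 0.80, 0.60, 0.88],
}

observed = [0.99, 0.94, 0.91, 0.87]  # simulated camera depth readings

# Keep the candidate scene under which the observed data is most probable.
best = max(candidate_scenes, key=lambda s: log_likelihood(observed, candidate_scenes[s]))
print(best)  # the physically plausible interpretation scores higher
```

Because the implausible interpretation predicts readings far from what the camera actually recorded, its likelihood collapses, which is the sense in which a probabilistic-programming system can reject scenes that defy common sense.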
Dec-9-2021, 01:13:31 GMT