Making artificial intelligence see the world the way humans do

#artificialintelligence

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do. "The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition." The platform can solve visual problems and understand sketches in order to give immediate, interactive feedback.


Making AI systems that see the world as humans do

#artificialintelligence

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do. "The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition." The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory.


AI scores higher than the average person on standard test

Daily Mail - Science & tech

Artificial intelligence can now outperform humans on a standard intelligence test. A new computational model scores in the 75th percentile, better than the average person, on a test known as Raven's Progressive Matrices. Researchers say this demonstrates that it can take on abstract visual reasoning tasks, and is a major step toward AI that can see and understand the world the way we do. Using Raven's Progressive Matrices, a nonverbal standardized test that measures abstract reasoning, the team found that their model is not only on par with humans but performs better than many. In a typical problem, participants choose which shape should come next in the sequence.
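To make the task format concrete, here is a toy sketch (not the Northwestern model, and far simpler than a real Raven's problem) of the kind of reasoning the test requires: each cell of a matrix is described by a few features, each row follows a constant transformation, and the solver picks the answer candidate that completes the pattern. All names and the feature encoding are illustrative assumptions.

```python
# Toy Raven's-style matrix: each cell is (number of shapes, shade level).
# In this made-up example, count increases by 1 across each row and shade
# increases by 1 down each column; the bottom-right cell is missing.
matrix = [
    [(1, 0), (2, 0), (3, 0)],
    [(1, 1), (2, 1), (3, 1)],
    [(1, 2), (2, 2)],          # incomplete row
]

candidates = [(2, 2), (3, 1), (3, 2), (4, 2)]

def row_delta(row):
    """Per-feature change between the first two cells of a row."""
    return tuple(b - a for a, b in zip(row[0], row[1]))

def solve(matrix, candidates):
    # Induce the transformation from the complete rows...
    deltas = {row_delta(row) for row in matrix[:2]}
    assert len(deltas) == 1, "rows disagree; a richer rule language is needed"
    delta = deltas.pop()
    # ...then apply it to the last cell of the incomplete row
    # and return the matching candidate, if any.
    last = matrix[2][-1]
    target = tuple(f + d for f, d in zip(last, delta))
    return target if target in candidates else None

print(solve(matrix, candidates))  # -> (3, 2)
```

A real test item encodes shapes, rotations, and overlays rather than two integers, which is why the actual research is hard; but the induce-a-rule-then-apply-it structure is the same.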


New Artificial Intelligence robots to mimic human cognition

#artificialintelligence

A team of artificial intelligence researchers from Northwestern University has built a computational model, based on the CogSketch platform, that mimics the understanding of ordinary human beings. This computational model of analogy draws on the structure-mapping theory of Northwestern psychology professor Dedre Gentner; the CogSketch platform itself was previously developed in Forbus' laboratory. According to Ken Forbus, the model performs in the 75th percentile for American adults. He added that problems that are difficult for humans are also difficult for the model, further evidence that it is mimicking human cognition. The model can also solve complex visual problems, one of the hallmarks of human intelligence.
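The core intuition behind structure-mapping theory can be sketched in a few lines of code. This is a deliberately tiny illustration, not Forbus's actual implementation (the real Structure-Mapping Engine is far more sophisticated): an analogy is an alignment of entities between a base and a target domain that preserves shared relations rather than surface attributes. The solar-system/atom example and all identifiers here are illustrative assumptions.

```python
from itertools import permutations

# Facts are (relation, a, b) triples. Base: the solar system; target: the atom.
base = {("attracts", "sun", "planet"),
        ("revolves_around", "planet", "sun"),
        ("more_massive", "sun", "planet")}
target = {("attracts", "nucleus", "electron"),
          ("revolves_around", "electron", "nucleus")}

def best_mapping(base, target):
    """Brute-force entity alignment: keep the mapping of base entities to
    target entities that preserves the largest number of relations."""
    base_ents = sorted({e for _, a, b in base for e in (a, b)})
    targ_ents = sorted({e for _, a, b in target for e in (a, b)})
    best, best_score = {}, -1
    for perm in permutations(targ_ents, len(base_ents)):
        m = dict(zip(base_ents, perm))
        # Count base relations that hold, under the mapping, in the target.
        score = sum((r, m[a], m[b]) in target for r, a, b in base)
        if score > best_score:
            best, best_score = m, score
    return best, best_score

mapping, score = best_mapping(base, target)
print(mapping, score)
```

Here the winning alignment maps the sun to the nucleus and the planet to the electron, because that pairing preserves the "attracts" and "revolves_around" relations; the unmatched "more_massive" fact is simply left out of the analogy, which is the behavior structure-mapping theory predicts for people.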