shape perception
La veille de la cybersécurité
Deep convolutional neural networks (DCNNs) do not view things the way humans do (through configural shape perception), which might be harmful in real-world AI applications. This is according to Professor James Elder, co-author of a York University study recently published in the journal iScience. The study, conducted by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York's Centre for AI & Society, and Nicholas Baker, an assistant psychology professor at Loyola College in Chicago and a former VISTA postdoctoral fellow at York, finds that deep learning models fail to capture the configural nature of human shape perception. To investigate how the human brain and DCNNs perceive holistic, configural object properties, the researchers used novel visual stimuli known as "Frankensteins." "Frankensteins are simply objects that have been taken apart and put back together the wrong way around," says Elder. "As a result, they have all the right local features, but in the wrong places."
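The configural deficit Elder describes can be probed informally with any off-the-shelf classifier. Below is a minimal sketch, not the study's actual Frankenstein procedure (which recombined object parts): it shuffles the patches of an image so local features survive while their arrangement is destroyed, then compares a pretrained ResNet-50's predictions on the intact and scrambled versions. The image path and grid size are placeholder assumptions.

```python
# Illustrative sketch only: patch shuffling is a stand-in for the study's
# "Frankenstein" manipulation, keeping local features but breaking their
# global arrangement.
import torch
import torchvision
from PIL import Image

weights = torchvision.models.ResNet50_Weights.DEFAULT
model = torchvision.models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def shuffle_patches(img_tensor, grid=4):
    """Cut a CxHxW tensor into a grid of patches and shuffle their positions."""
    c, h, w = img_tensor.shape
    ph, pw = h // grid, w // grid
    patches = [img_tensor[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = torch.randperm(len(patches))
    rows = [torch.cat([patches[order[i * grid + j]] for j in range(grid)], dim=2)
            for i in range(grid)]
    return torch.cat(rows, dim=1)

img = preprocess(Image.open("object.jpg"))  # placeholder image path
scrambled = shuffle_patches(img)

with torch.no_grad():
    for name, x in [("intact", img), ("scrambled", scrambled)]:
        probs = model(x.unsqueeze(0)).softmax(dim=1)
        top = int(probs.argmax(dim=1))
        print(name, weights.meta["categories"][top], round(float(probs[0, top]), 3))
```

If the label and its confidence barely change after scrambling, the network is leaning on local features rather than configural shape, which is the kind of insensitivity the study reports for DCNNs.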
AI Uses Potentially Dangerous "Shortcuts" To Solve Complex Recognition Tasks
The researchers revealed that deep convolutional neural networks were insensitive to configural object properties.
Even smartest AI can't match human eye - Gadget
A common class of artificial intelligence models known as deep convolutional neural networks (DCNNs) does not see objects the way humans do – and that could be dangerous in real-world AI applications. That is the conclusion of Professor James Elder, co-author of a York University study published recently, which finds that AI lacks something called "configural shape perception", a standard part of how humans recognise shapes. Published in the Cell Press journal iScience, the paper "Deep learning models fail to capture the configural nature of human shape perception" is a collaborative study by Elder, who holds the York research chair in human and computer vision and is co-director of York's Centre for AI & Society, and Nicholas Baker, an assistant psychology professor at Loyola College in Chicago and a former postdoctoral fellow at York. The study employed novel visual stimuli called "Frankensteins" to explore how the human brain and DCNNs process holistic, configural object properties. "Frankensteins are simply objects that have been taken apart and put back together the wrong way around," says Elder. "As a result, they have all the right local features, but in the wrong places."
Study highlights how AI models take potentially dangerous 'shortcuts' in solving complex recognition tasks
Deep convolutional neural networks (DCNNs) don't see objects the way humans do--using configural shape perception--and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study published today. Published in the Cell Press journal iScience, "Deep learning models fail to capture the configural nature of human shape perception" is a collaborative study by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York's Centre for AI & Society, and Assistant Psychology Professor Nicholas Baker at Loyola College in Chicago, a former VISTA postdoctoral fellow at York. The study employed novel visual stimuli called "Frankensteins" to explore how the human brain and DCNNs process holistic, configural object properties. "Frankensteins are simply objects that have been taken apart and put back together the wrong way around," says Elder. "As a result, they have all the right local features, but in the wrong places." The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not--revealing an insensitivity to configural object properties.
How Does Understanding Of AI Shape Perceptions Of XAI?
One of the biggest challenges of machine learning and artificial intelligence is their inability to explain their decisions to users. This black-box quality renders such systems largely impenetrable, making it difficult for scientists and researchers to understand why a given system behaves the way it does. In recent years, a new branch called explainable AI (XAI) has emerged, which researchers are actively pursuing to make AI more user-friendly. That said, how AI explanations are perceived depends heavily on a person's background in AI. A new study, "The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations", argues that a person's AI background influences how they interpret AI explanations, and that these differences can be understood through the lenses of appropriation and cognitive heuristics.
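For readers unfamiliar with what an "AI explanation" can look like in practice, here is a minimal sketch of one common post-hoc form, permutation feature importance, computed with scikit-learn on a standard dataset. It is purely illustrative and is not the explanation technique examined in the study above; the dataset and model are arbitrary choices.

```python
# Illustrative only: one simple post-hoc "explanation" of a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for feature, score in top5:
    print(f"{feature}: {score:.3f}")
```

How convincing such a ranking seems, and how much weight a user gives it, is exactly the kind of perception question the studies described here investigate.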
Even experts are too quick to rely on AI explanations, study finds
As AI systems increasingly inform decision-making in health care, finance, law, and criminal justice, they need to provide justifications for their behavior that humans can understand. The field of "explainable AI" has gained momentum as regulators turn a critical eye toward black-box AI systems -- and their creators. But how a person's background can shape perceptions of AI explanations is a question that remains underexplored. A new study coauthored by researchers at Cornell University, IBM, and the Georgia Institute of Technology aims to shed light on the intersection of interpretability and explainable AI.
Conceptual Ternary Diagrams for Shape Perception: A Preliminary Step
Rudduck, Sylvan Grenfell (University of Technology, Sydney) | Williams, Mary-Anne (University of Technology, Sydney)
This work-in-progress provides a preliminary cognitive investigation of how the external visualization of the ternary diagram (TD) might be used as an underlying model for exploring the representation of simple 3D cuboids according to the theory of Conceptual Spaces. Gärdenfors introduced geometrical entities, known as conceptual spaces, for modeling concepts. He considered multidimensional spaces equipped with a range of similarity measures (such as metrics) and guided by criteria and mechanisms as a geometrical model for concept formation and management. Our work is inspired by the conceptual spaces approach and takes ternary diagrams as its underlying conceptual model. The main motivation for our work is twofold. First, ternary diagrams are powerful conceptual representations that have a solid historical and mathematical foundation. Second, the notion of overlaying an information-entropy function on a ternary diagram can lead to new insights into applications of reasoning about shape and other cognitive processes.
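As a rough illustration of the idea in this abstract, the sketch below, written under our own assumptions rather than taken from the paper, maps a cuboid's three side lengths to normalized proportions that sum to one (a point on a ternary diagram) and overlays a Shannon-entropy value on that point; the function names and example dimensions are hypothetical.

```python
# Illustrative sketch: a cuboid as a point on a ternary diagram, with an
# entropy value overlaid on that point. Not code from the paper.
import math

def ternary_point(length, width, height):
    """Normalize a cuboid's side lengths into barycentric (ternary) coordinates."""
    total = length + width + height
    return (length / total, width / total, height / total)

def entropy_bits(proportions):
    """Shannon entropy of the proportions: maximal for a cube (1/3, 1/3, 1/3),
    near zero for a very elongated box."""
    return -sum(p * math.log2(p) for p in proportions if p > 0)

for name, dims in [("cube", (1, 1, 1)), ("slab", (4, 4, 0.5)), ("rod", (10, 1, 1))]:
    point = ternary_point(*dims)
    print(name, tuple(round(p, 3) for p in point), round(entropy_bits(point), 3))
```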