Fixing the problems of deep neural networks will require better training data and learning algorithms
Over the past decade, vision scientists have turned to deep neural networks (DNNs) to model biological vision. The popularity of DNNs comes from their ability to rival human performance on visual tasks [1] and the seemingly concomitant correspondence of their hidden units with biological vision [2]. Bowers and colleagues [3] marshal evidence from psychology and neuroscience to argue that while DNNs and biological systems may achieve similar accuracy on visual benchmarks, they often do so by relying on qualitatively different visual features and strategies [4-6]. Based on these findings, Bowers and colleagues call for a re-evaluation of what DNNs can tell us about biological vision and suggest dramatic adjustments going forward, potentially even moving on from DNNs altogether. Are DNNs poorly suited to model biological vision?
Researchers develop a computer that's fooled by optical illusions
Say you're staring at the image of a small circle in the center of a larger circle: The larger one looks green, but the smaller one appears gray. Except your friend looks at the same image and sees the smaller circle as green, too. So is it green or gray? It can be maddening and fun to try to decipher what is real and what is not. In this instance, your brain is processing a type of optical illusion, a phenomenon where your visual perception is shaped by the surrounding context of what you are looking at.
Why computers are so bad at comparing objects - Futurity
You are free to share this article under the Attribution 4.0 International license. New research sheds light on why computers are so bad at a class of tasks that even young children have no problem with: determining whether two objects in an image are the same or different. "There's a lot of excitement about what computer vision has been able to achieve…" Computer vision algorithms have come a long way in the past decade. They've been shown to be as good as or better than people at tasks like categorizing dog or cat breeds, and they have the remarkable ability to identify specific faces out of a sea of millions. In a paper presented last week at the annual meeting of the Cognitive Science Society, the team examines why computer vision algorithms fail at comparison tasks and suggests avenues toward smarter systems.
A Computer With a Great Eye Is About to Transform Botany
My dad is a wildlife biologist, and during road trips we took when I was growing up he spent a lot of time talking about the grasses and trees along the highway. It was a game he played, trying to correctly identify the passing greenery from the driver's seat of a moving car. As a carsick-prone kid wedged into the back seat of a Ford F-150, I found this supremely lame. As an adult (specifically, one who just spoke with a paleobotanist), I now know something about my father's road-tripping habit: identifying leaves isn't easy. "I've looked at tens of thousands of living and fossil leaves," says that paleobotanist, Peter Wilf of Penn State's College of Earth and Mineral Sciences.
Computer learns to identify leaves faster than a botanist - Futurity
Posted by A'ndrea Elyse Messer-Penn State on March 8, 2016. Identifying an isolated leaf, especially if preserved as a fossil, can be a painstaking process for botanists. A new computer program that learns to categorize leaves into large evolutionary categories could help. Researchers "trained" a machine-learning algorithm to identify leaves based on a set of nearly 7,600 digital images of leaves that had been chemically treated to emphasize their shape and venation. The software discerned relevant patterns so well from that set of examples that it went on to identify the family of novel leaf images with greater than 70 percent accuracy (a rate 13 times better than chance) and the order with about 60 percent accuracy.
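The workflow the article describes (train on labeled leaf examples, then measure accuracy on held-out images against chance) can be sketched in miniature. This is an illustrative stand-in, not the team's actual pipeline: the real study used images of chemically treated leaves, while here synthetic feature vectors and a generic scikit-learn classifier take their place. The choice of 19 classes is an assumption, back-calculated from the article's figures (70 percent accuracy at 13 times chance implies chance near 1/19).

```python
# Minimal sketch of a train/evaluate-against-chance workflow, assuming
# synthetic features in place of real leaf images and a generic linear
# classifier in place of the study's algorithm.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

N_FAMILIES = 19  # hypothetical: chosen so chance ~= 1/19, per the article's math

def train_leaf_family_classifier(n_samples=7600, seed=0):
    # ~7,600 examples, mirroring the size of the study's training set
    X, y = make_classification(
        n_samples=n_samples, n_features=64, n_informative=32,
        n_classes=N_FAMILIES, n_clusters_per_class=1, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Accuracy on held-out examples the model never saw during training
    return accuracy_score(y_te, clf.predict(X_te))

acc = train_leaf_family_classifier()
chance = 1 / N_FAMILIES
print(f"held-out accuracy: {acc:.2f} (chance ~= {chance:.2f})")
```

The key design point the article highlights survives even in this toy version: the classifier is judged on novel examples, and the meaningful baseline is chance performance, not zero.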