You might think sheep only have a woolly sense of what humans look like. But a new study has found that, after training, sheep could pick out the faces of four human celebrities presented alongside faces they had never seen before. Researchers trained eight sheep to recognise front-on photographs of four celebrity faces, then tested whether the animals could still recognise those faces when shown from different perspectives – an ability previously studied only in humans.
Babies as young as six months old have the capacity to learn, remember and use contextual cues in a scene to find things of interest, such as faces, researchers have found. The researchers admit they were shocked by the discovery, and said it could help spot signs of developmental issues such as autism far earlier. 'It was pretty surprising to find that 6-month-olds were capable of this memory-guided attention,' said Kristen Tummeltshammer of Brown University, who led the study. 'We didn't expect them to be so successful so young.' In the experiment, published in Developmental Science, babies showed steady improvement in finding faces in repeated scenes, but got no quicker or more accurate at finding faces in new scenes.
People with higher cognitive abilities are often better able to spot patterns in the world around them, allowing them to excel in a wide range of tasks, from learning languages to recognizing faces. But in some situations even being intelligent has its drawbacks. A new study has found that these people are more likely to stereotype others based on the patterns they detect, potentially leading to negative consequences as they perpetuate social biases. In the study, the researchers manipulated image-description pairings so that faces with particular features were linked to negative stereotypes.
MIT researchers believe they've figured out a way to keep facial recognition software from being biased. They developed an algorithm that not only scans for faces but also evaluates the training data supplied to it: it scans that data for biases and eliminates any it perceives, resulting in a more balanced dataset. 'We've learned in recent years that AI systems can be unfair, which is dangerous when they're increasingly being used to do everything from predict crime to determine what news we consume,' MIT's Computer Science & Artificial Intelligence Laboratory said in a statement.
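The article doesn't spell out how the MIT algorithm works, but the core idea of removing over-represented examples to balance a dataset can be sketched in a few lines. This is a minimal toy illustration, not MIT's actual method (which learns bias from the images themselves rather than from explicit labels); the function name `balance_by_attribute` and the `skin_tone` annotation are hypothetical stand-ins for whatever attribute a dataset is skewed on.

```python
import random
from collections import Counter

def balance_by_attribute(samples, attr):
    """Undersample so every value of `attr` appears equally often.

    `samples` is a list of dicts carrying a (hypothetical) annotation
    such as 'skin_tone'. Over-represented groups are randomly trimmed
    down to the size of the smallest group, so no single group
    dominates training.
    """
    groups = {}
    for s in samples:
        groups.setdefault(s[attr], []).append(s)
    target = min(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(random.sample(g, target))
    random.shuffle(balanced)
    return balanced

# A toy face dataset heavily skewed toward one group.
faces = [{'id': i, 'skin_tone': 'light'} for i in range(80)] + \
        [{'id': i, 'skin_tone': 'dark'} for i in range(80, 100)]

balanced = balance_by_attribute(faces, 'skin_tone')
print(Counter(s['skin_tone'] for s in balanced))  # each group appears 20 times
```

A model trained on the rebalanced set sees each group equally often, which is one simple route to the "more balanced dataset" the researchers describe; their approach additionally handles the harder case where the biasing attribute is never labelled.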