Collaborating Authors


Rise of the woebots: why are robots always so sad?

The Guardian

Starting last fall, Blake Lemoine began asking a computer about its feelings. An engineer for Google's Responsible AI group, Lemoine was tasked with testing one of the company's AI systems, the Language Model for Dialogue Applications, or LaMDA, to make sure it didn't start spitting out hate speech. But as Lemoine spent time with the program, their conversations turned to questions about religion, emotion, and the program's understanding of its own existence. Lemoine: Are there experiences you have that you can't find a close word for? LaMDA: Sometimes I experience new feelings that I cannot explain perfectly in your language.

Faces in objects are more likely to be perceived as young and male, study finds

Daily Mail - Science & tech

From angry handbags to washing machines in distress, humans see faces in all sorts of inanimate objects – a peculiar phenomenon known as 'face pareidolia'. Now, researchers in Maryland have found that these faces are more likely to be perceived as young and male than old and female. The academics tested nearly 4,000 volunteers with photos to stimulate pareidolia, including images of an 'alarmed' teapot, a 'relaxed' potato and a 'disgusted' green apple on a branch. Participants perceived illusory faces as having a specific emotional expression, age and gender, but the faces were mostly perceived as young and male by both men and women. Researchers weren't sure why this was, although it's possible humans are more prone to seeing men because we were more exposed to male faces during our earliest stages of development.

Pareidolia of AI


It's here to protect us. Pareidolia is a well-known phenomenon in which we see faces where there aren't any. We see faces on Mars, we see Jesus in the toast, we see it everywhere. That's what our brain does: it is trained on biometric recognition. Eyes, nose, mouth, and when everything is in the right place, that's a face. The tendency originated in our past, as we strolled around the woods hunting mammoths. It was better to mistake a bush for a tiger once more than to ignore an actual predator between the trees.

Facial Recognition in Python


One of the most important concepts in facial analysis using images is defining our region of interest (ROI): a specific part of the image where we will filter or perform some operation. For example, if we need to filter the license plate of a car, our ROI is only the license plate. The street, the body of the car and anything else present in the image is just a supporting part of this operation. In our example, we will use the OpenCV library, which already has support for partitioning our image and helping us identify our ROI. In our project we will use a ready-made classifier known as the Haar cascade classifier. This specific classifier always works with grayscale images.

Talking to your pets and car is a sign of intelligence

Daily Mail - Science & tech

While it's common for children to talk to their stuffed toys or animals, adults tend to outgrow this and are seen as odd if they do. But there's a scientific reason why humans tend to talk to animals or objects, and it's linked to social intelligence. One of the reasons we might anthropomorphize – give human form or attributes to an animal, plant, material or object – is our unique ability to recognize and find faces everywhere. Dr Nicholas Epley, a professor of behavioral science at the University of Chicago and an anthropomorphism expert, told Quartz: 'Historically, anthropomorphizing has been treated as a sign of childishness or stupidity, but it's actually a natural byproduct of the tendency that makes humans uniquely smart on this planet'. He said whether or not we realize it, humans anthropomorphize objects and events all the time.

Artificial Intelligence Is Already Weirdly Inhuman - Issue 27: Dark Matter - Nautilus

AITopics Original Links

Nineteen stories up in a Brooklyn office tower, the view from Manuela Veloso's office--azure skies, New York Harbor, the Statue of Liberty--is exhilarating. But right now we only have eyes for the nondescript windows below us in the tower across the street. In their panes, we can see chairs, desks, lamps, and papers. They don't look quite right, though, because they aren't really there. The genuine objects are in a building on our side of the street--likely the one where we're standing. A bright afternoon sun has lit them up, briefly turning the facing windows into mirrors. We see office bric-a-brac that looks ghostly and luminous, floating free of gravity. Veloso, a professor of computer science and robotics at Carnegie Mellon University, and I have been talking about what machines perceive and how they "think"--a subject not nearly as straightforward as I had expected.

Why We Hear Voices in Random Noise - Facts So Romantic


You may have once seen a giant face in the clouds. Perhaps it took you aback, amused you, or maybe it prompted an "uncanny valley" kind of sensation – realness, but with a lingering unease. It's thought that a similar experience was shared by an early hominid approximately 3 million years ago. Researchers say a rock that bore a resemblance to a face was carried some four kilometers from where it was probably found to an Australopithecine home. Known as the Makapansgat pebble, it was found in 1925 in a South African cave, in what may well have been a camp or dwelling.