Collaborating Authors

Bartneck


Robots are more likely to be deemed a threat if their 'skin' is darker claims new study

Daily Mail - Science & tech

A new study suggests that the same racial stereotypes applied to people are also applied to their mechanical kin. Researchers from the Human Interface Technology Laboratory in New Zealand say humans perceive robots that resemble humans as having a certain race and may apply stereotypes to a bot depending on the shade of its 'skin'. The findings come from what's known as a shooter bias test. In the experiment, participants were shown various images of armed and unarmed subjects and asked to make a split-second reaction based on the level of 'threat.' An affirmative reaction came in the form of participants pressing a button, or in other words, choosing to pull the trigger. What they found was that people were more apt to 'shoot' robots with darker tones than lighter ones, even when they posed no threat.


Humans have a hard time 'killing' robots, especially when they beg for their lives

Popular Science

In a recent paper in the journal PLOS ONE, German researchers asked 89 college students to team up with a tiny, bright-eyed robot named Nao to answer questions and complete menial tasks. But, as is typical in experimental psychology, these tasks were a distraction from the real question under investigation: What happens when the humans have to turn the robot off? In 43 cases, the Verge reported earlier this month, "the robot protested, telling participants it was afraid of the dark and even begging: 'No! Please do not switch me off!'" As the researchers predicted, participants struggled to switch off the machine they had previously worked with as a partner. Thirty of the humans took twice as long on average to turn off the robot as the group whose robots said nothing at all.


Even black robots are impacted by racism

#artificialintelligence

The researchers collected photos of people of different races alongside Nao, a humanoid robot, and changed the color of the robot's shell to a variety of human skin tones. Their experimental setup relied on the "shooter bias" procedure, in which participants play the role of a police officer who has to decide whether or not to shoot when shown different images. Each photo showed either a person or Nao, holding either a weapon or some other benign object. The study subjects saw the picture for only a split second and were asked to act on instinct. The study found that the participants were faster to shoot an armed black human or robot than they were to shoot their white counterparts.


Humans Show Racial Bias Towards Robots of Different Colors: Study

IEEE Spectrum Robotics

The majority of robots are white. Do a Google image search for "robot" and see for yourself: The whiteness is overwhelming. There are some understandable reasons for this; for example, when we asked several different companies why their social home robots were white, the answer was simply that white most conveniently fits in with other home decor. But a new study suggests that the color white can also be a social cue that results in a perception of race, especially if it's presented in an anthropomorphic context, such as being the color of the outer shell of a humanoid robot. In addition, the same issue applies to robots that are black in color, according to the study.


Fears artificial intelligence could change the way people think

#artificialintelligence

Could robots change the way we think? While that might seem the stuff of dark science fiction, New Zealand artificial intelligence (AI) experts say there's real fear that computer algorithms could hijack our language, and ultimately influence our views on products or politics. "I would compare the situation with the subliminal advertising that was outlawed in the 1970s," said Associate Professor Christoph Bartneck, of Canterbury University's Human Interface Technology Laboratory, or HIT Lab. "We are in danger of repeating the exact same issue with the use of our language." Bartneck has been working in the area with colleague Jurgen Brandstetter and other experts at the New Zealand Institute of Language, Brain and Behaviour and Northwestern University in the United States.


Who's Iris Pear? Nuclear physics conference accepts nonsensical 'autocomplete' study

Christian Science Monitor | Science

Next month, Dr. Iris Pear will present her groundbreaking new study at the International Conference on Atomic and Nuclear Physics. Iris Pear – a play on "Siri Apple" – is the invention of Christoph Bartneck, an associate professor of computer science at New Zealand's University of Canterbury. The study in question is completely nonsensical, procedurally generated by iOS's autocomplete function. Why, then, did a conference for "leading academic scientists" select it for presentation? On Thursday, Dr. Bartneck received an invitation to submit research for an upcoming conference on nuclear physics.


Is he Siri-us? Professor writes entire nonsense paper using Apple autocomplete app, only for it to be ACCEPTED for an academic conference

Daily Mail - Science & tech

An academic who jokingly wrote a research paper entirely with Apple's iOS autocomplete, producing text filled with nonsense, has been accepted to present his findings at a nuclear physics conference. Christoph Bartneck, an associate professor at the University of Canterbury's Human Interface Technology Laboratory in New Zealand, was stunned to discover he had been successful in securing a place at the conference, which takes place in America next month. 'I started a sentence with 'Atomic' or 'Nuclear' and then randomly hit the autocomplete suggestions,' wrote Bartneck in a blog post on Thursday. 'The text really does not make any sense.' Bartneck's mischievous side was fired up after receiving an invitation from the International Conference on Atomic and Nuclear Physics, which will be held in Atlanta in November.

