The "whiteness" of artificial intelligence (AI) removes people of colour from the way humanity thinks about its technology-enhanced future, researchers argue. University of Cambridge experts suggest that current portrayals of and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, building machines with bias baked into their algorithms. The scientists say cultural depictions of AI as white need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. In their paper "The Whiteness of AI", published in the journal Philosophy and Technology, Leverhulme CFI Executive Director Stephen Cave and Dr Kanta Dihal offer insights into the ways in which portrayals of AI stem from, and perpetuate, racial inequalities. Cave and Dihal cite research showing that people perceive race in AI, not only in human-like robots but also in abstracted and disembodied AI.
The recruitment process has come a long way since the days of paper CVs. Thanks to a decade-long digital transformation, online job sites, virtual portfolios, and even Skype interviews are now staples in global talent acquisition. But could artificial intelligence (AI) elevate the hiring landscape and take the recruitment process one step further? AI has become something of a buzzword lately. When we think of AI, we often think of human-like robots which can mimic our behaviour (and potentially take over the world someday). However, although artificially intelligent robots do exist, the term AI typically applies to any self-learning machine that can analyse data and provide insights that make us smarter, more efficient and better at the things we do every day.
Japanese tech company SoftBank has created a version of its Pepper robot that can detect whether office workers are wearing a mask. The 47-inch-high robot with human-like features is already in operation in some countries, welcoming visitors to shops, exhibitions and other public spaces. The upgraded version is designed to stand at the entrance to offices, conferences, airports and other public spaces, providing a gentle reminder to people to wear masks. Pepper uses enhanced AI face detection to scan a person's face, and if it detects that the lower half is uncovered, it displays a red circle on the screen on its chest and says: 'I see one of you is not wearing a mask.' If it sees that the visitor then puts on a mask, the circle turns green and the robot follows up with: 'Thank you for having put on your mask.' SoftBank first debuted Pepper in 2014.
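The red-circle/green-circle behaviour described above amounts to a simple two-state response rule. A minimal sketch of that logic, in Python, is below; `face_has_mask` and `mask_reminder` are hypothetical stand-ins, since Pepper's actual detection model and API are not described in the article.

```python
# Hypothetical sketch of Pepper's mask-reminder behaviour as reported:
# red circle + reminder when the lower face is uncovered, green circle
# + thanks once a previously unmasked visitor puts a mask on.

def mask_reminder(face_has_mask: bool, previously_unmasked: bool):
    """Return (indicator_colour, spoken_message) for one detection pass."""
    if not face_has_mask:
        return ("red", "I see one of you is not wearing a mask.")
    if previously_unmasked:
        # Visitor complied after being reminded.
        return ("green", "Thank you for having put on your mask.")
    # Masked on arrival: no reminder needed.
    return ("green", "")
```

The `previously_unmasked` flag captures the follow-up step the article describes: the thank-you line is only spoken after a visitor who was flagged puts a mask on.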
Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly becomes strong revulsion. However, as the robot's appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels. This area of repulsive response aroused by a robot with appearance and motion between a "barely human" and "fully human" entity is the uncanny valley. The name captures the idea that an almost human-looking robot seems overly "strange" to some human beings, produces a feeling of uncanniness, and thus fails to evoke the empathic response required for productive human–robot interaction. The last sentence is the crucial one: a robot "fails to evoke the empathic response required for productive human-robot interaction."
In Tokyo, an eerily human-like robot called Aiko Chihira wears a colourful kimono and greets shoppers at the entrance of a glossy department store. At the nearby Uniqlo warehouse, AI machines have now replaced 90 percent of human staff and work day and night. When we thought of mass job losses in the fashion industry, we pictured a robotic future like this one, which – until March – felt years away from a British reality. However, the pandemic has accelerated a move towards automation that could otherwise have taken a decade, with the British Fashion Council suggesting a quarter of a million industry jobs might be lost. Debenhams axed 2,500 jobs on Tuesday after the 4,000 cuts the group made in April, while Burberry recently announced plans to cut 500 jobs worldwide, and M&S 950.
Rapid technological advancements are giving rise to a new generation of robots that have the ability to perform diverse tasks in open spaces and work with and alongside people. While already a mainstay within industrial and manufacturing sectors, next-generation robots are being utilized in non-traditional settings, such as grocery stores, hotels, airports, banks, shopping malls and public spaces, including sidewalks and parks. UL has issued UL 3300, the Outline of Investigation (OOI) for Service, Communication, Information, Education and Entertainment (SCIEE) Robots. This is UL's first consumer and commercial robot certification document that addresses human-robot interaction safety concerns. Since SCIEE (pronounced sky) robots typically operate near humans, the outline places a priority on the safe operation of robots in a variety of environments where people are present.
Swapping bodies with another person would have a profound effect on the subject's behaviour and even their personality, a new study has revealed. Scientists at the Karolinska Institutet in Sweden devised a perceptual illusion that allows people to experience the effect of swapping bodies, in order to understand the relationship between a person's psychological and physical sense of self. They found that when pairs of friends "switched bodies", each friend's personality became more like the other's. "Body swapping is not a domain reserved for science fiction anymore," said Pawel Tacikowski, a postdoctoral researcher at the institute and lead author of the study. To create the illusion that the study's subjects had switched bodies, Dr Tacikowski and his team fitted them with virtual reality goggles showing live feeds of the other person's body from a first-person perspective.
The US plans to invest $1 billion (£760 million) in quantum computing and artificial intelligence research, the White House has announced. The initiative will fund 12 hubs around the country and help the US in its bid to compete with China and Europe in two of the most promising next-generation technologies. US Chief Technology Officer Michael Kratsios described the investment as "unprecedented" and a "defining achievement" of the Trump administration. "Built upon the uniquely American free market approach to technological advancement, these institutes will be world-class hubs for accelerating American innovation and building the 21st century American workforce," he said in a statement. Governments around the world, as well as technology giants like Alphabet and Alibaba, are investing heavily in the development of AI and quantum computers.
An AI pilot has defeated a US Air Force pilot in a virtual F-16 dogfight in a "coming of age" moment for artificial intelligence. The US military's AlphaDogfight Trials was organised by the Defense Advanced Research Projects Agency (Darpa) - a secretive branch of the US Department of Defense responsible for the development of futuristic technologies. It sought to demonstrate the "feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight." The winning AI pilot, developed by Heron Systems, defeated other AI adversaries before going on to beat a human pilot wearing a VR helmet by a score of 5-0 in the final. "We've gotten an opportunity to watch AI come of age [against] a very credible adversary in the human pilot," said Col. Dan Javorsek, program manager in Darpa's Strategic Technology Office.