New Scientist


AI that cracked ancient secret code could help robot translation

Secret code or foreign language? For machines, it might not matter. Without any prior knowledge, artificial intelligence algorithms have cracked two classic forms of encryption: the Caesar cipher and the Vigenère cipher. As translating languages is similar to decoding a cipher, the approach may improve translation software. To break the ciphers, Aidan Gomez and colleagues at the University of Toronto and Google used a type of algorithm called a generative adversarial network (GAN). The GAN started with no knowledge of ciphers or language, but by analysing thousands of English …
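As a concrete illustration of the simpler of those two ciphers, here is a minimal Python sketch. It is not the GAN approach from the study; it encrypts text with a Caesar shift and then recovers the shift purely from letter-frequency statistics, the same kind of statistical regularity in English that the GAN has to learn from unpaired text. The frequency table is approximate and the example plaintext is invented.

# Illustrative sketch only: a Caesar cipher and a statistics-based attack.
# This is NOT the GAN method from the article; it just shows how the
# letter-frequency structure of English makes such ciphers breakable
# without ever seeing a key.

import string

ALPHABET = string.ascii_lowercase

# Approximate letter frequencies of English text (assumed values, for illustration).
ENGLISH_FREQ = {
    'a': 0.082, 'b': 0.015, 'c': 0.028, 'd': 0.043, 'e': 0.127, 'f': 0.022,
    'g': 0.020, 'h': 0.061, 'i': 0.070, 'j': 0.002, 'k': 0.008, 'l': 0.040,
    'm': 0.024, 'n': 0.067, 'o': 0.075, 'p': 0.019, 'q': 0.001, 'r': 0.060,
    's': 0.063, 't': 0.091, 'u': 0.028, 'v': 0.010, 'w': 0.024, 'x': 0.002,
    'y': 0.020, 'z': 0.001,
}

def caesar(text, shift):
    """Shift every letter by `shift` positions (use a negative shift to decrypt)."""
    out = []
    for ch in text.lower():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)
    return ''.join(out)

def crack_caesar(ciphertext):
    """Recover the shift by picking the candidate whose letter frequencies
    best match English (lowest chi-squared score)."""
    best_shift, best_score = 0, float('inf')
    for shift in range(26):
        candidate = caesar(ciphertext, -shift)
        counts = {c: candidate.count(c) for c in ALPHABET}
        total = sum(counts.values()) or 1
        score = sum(
            (counts[c] / total - ENGLISH_FREQ[c]) ** 2 / ENGLISH_FREQ[c]
            for c in ALPHABET
        )
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift

secret = caesar("machines can learn to break simple ciphers", shift=7)
shift = crack_caesar(secret)
print(secret)
print("recovered shift:", shift)
print(caesar(secret, -shift))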


Facebook is making a chatbot that can fill awkward silences

There are a lot of things that chatbots have yet to master, and high on the list is small talk. But researchers at Facebook think the best way to make software prattle away is to give it a personality. The team crowdsourced their chatbot personas from 11,000 online workers on Amazon's Mechanical Turk. Workers were asked to roleplay in pairs and to give statements describing made-up personas, including their likes and dislikes. The crowdworkers' chatter was linked to these description statements and used to train the chatbots. …

Correction: We corrected Jason Weston's name and the reality status of the workers' personae.
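To make the setup concrete, here is a minimal, hypothetical sketch of how persona statements and the crowdworkers' dialogue turns could be paired into training examples for a persona-conditioned chatbot. The field layout and the example persona are invented for illustration and are not Facebook's actual data format.

# Illustrative sketch only: a hypothetical layout for persona-conditioned
# chit-chat data of the kind the article describes. The persona and dialogue
# below are made up.

persona = [
    "i like to ski",
    "my favourite food is cheese",
    "i have two dogs",
]

dialogue = [
    ("hi! what do you do for fun?", "mostly skiing, and playing with my two dogs."),
    ("nice, do you have a favourite food?", "cheese, without question."),
]

def to_training_examples(persona, dialogue):
    """Turn (persona, dialogue) into (context, target reply) pairs, so a model
    learns to answer in a way that is consistent with the persona statements."""
    examples = []
    history = []
    for user_turn, bot_reply in dialogue:
        history.append(user_turn)
        context = "\n".join(persona + history)
        examples.append((context, bot_reply))
        history.append(bot_reply)
    return examples

for context, reply in to_training_examples(persona, dialogue):
    print("CONTEXT:\n" + context)
    print("TARGET:", reply)
    print("---")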


Art history AI sees links between hundreds of years of paintings

Machines are getting highbrow. One artificial intelligence has learned how to create new styles of art – now another is teaching itself art history. By analysing thousands of paintings produced over hundreds of years, the AI was able to spot connections between generations of painters that matched accepted theories in the art world. It might even teach us something new. "The machine could be seeing some complex links that we have no idea about," says Marian Mazzone, …
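One way such cross-generation links can be surfaced is by comparing learned feature vectors of paintings and, for each work, finding its closest match among earlier works. The sketch below is purely illustrative: the "paintings" and their feature vectors are random stand-ins, not output from the actual system, which would extract features from the images with a trained network.

# Illustrative sketch only: linking paintings across periods by feature
# similarity. Titles, years and vectors are invented placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paintings: (title, year, feature vector).
paintings = [
    ("Painting A", 1620, rng.normal(size=64)),
    ("Painting B", 1750, rng.normal(size=64)),
    ("Painting C", 1875, rng.normal(size=64)),
    ("Painting D", 1910, rng.normal(size=64)),
]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# For each painting, find its most similar predecessor: a crude proxy for
# the cross-generation connections the article describes.
for i, (title, year, vec) in enumerate(paintings):
    earlier = [(t, y, v) for t, y, v in paintings[:i] if y < year]
    if not earlier:
        continue
    best = max(earlier, key=lambda p: cosine(vec, p[2]))
    print(f"{title} ({year}) looks closest to {best[0]} ({best[1]}), "
          f"similarity {cosine(vec, best[2]):.2f}")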


Chinese police use face recognition glasses to catch criminals

For the past two months, cyborg police officers have screened travellers passing through Zhengzhou railway station in China. The officers, wearing smart glasses with built-in face recognition, have caught seven fugitives and 26 fake ID holders already. According to local media, some of the fugitives were wanted for alleged involvement in human trafficking cases. Zhang Xin, at LLVision, the firm that developed the GLXSS Pro smart glasses, says the glasses are very light, so the police officers can wear them all day. Feedback so far has been positive, she says. …


Face-recognition software is perfect – if you're a white man

Face-recognition software can guess your gender with amazing accuracy – if you are a white man. Joy Buolamwini at the Massachusetts Institute of Technology tested three commercially available face-recognition systems, created by Microsoft, IBM and the Chinese company Megvii. The systems correctly identified the gender of white men 99 per cent of the time. But the error rate rose for people with darker skin, reaching nearly 35 per cent for women. The results will be presented at the Conference on Fairness, Accountability, and Transparency in New York later this month.

Face-recognition software is already being used in many different situations, including by police to identify suspects in a crowd and to automatically tag photos. This means inaccuracies could have consequences, such as systematically ingraining biases in police stop-and-search practices. Biases in artificial intelligence systems tend to come from biases in the data they are trained on. According to one study, a widely used data set is around 75 per cent male and more than 80 per cent white.

This article appeared in print under the headline "Face recognition's biases on show"
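The kind of audit described here amounts to reporting error rates per demographic group rather than a single overall accuracy. A minimal sketch follows, using made-up evaluation records; the real figures come from Buolamwini's study, not from this code.

# Illustrative sketch only: disaggregating error rates by demographic group.
# The records below are invented; a real audit would use labelled benchmark
# images and the predictions of an actual face-recognition system.

from collections import defaultdict

# Hypothetical evaluation records: (group label, predicted gender, true gender).
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned female", "male", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned male", "female", "male"),
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# A single headline accuracy hides the gap between groups; reporting
# per-group error rates is what exposes it.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")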