Chinese guests at Marriott International, the world's largest hotel chain, may soon be able to check in with a quick scan of their facial features. The chain will work in a joint venture with Chinese e-commerce giant Alibaba Group to test facial recognition check-ins at two hotels in China this month, the firms said on Wednesday, with ambitions for a global rollout later. China is spearheading the use of facial recognition for everything from helping control major live events to ordering fast food, but the technology is also bolstering a growing domestic surveillance system that has raised fears among human rights activists of privacy being invaded. The joint venture said the new technology would help guests skip queues and cut the check-in process to less than a minute, compared with at least three minutes at a normal counter. Chinese guests will need to scan their IDs, take a photo and input contact details on an automated machine, the firms said.
Facial recognition AI could help police to spot 'potentially dangerous' criminals before they've even broken the law, according to one expert. Dr Michal Kosinski - who last year invented a controversial AI he claimed could detect your sexuality - said such face-reading technology may one day help CCTV cameras monitor public spaces for people predisposed to violent behaviour. While the concept raises privacy issues, it has the potential to save lives, the Stanford University academic claims. Dr Kosinski is currently working on computer programmes that detect everything from your political beliefs to your IQ by looking at a single photograph. The Stanford researcher hit headlines last year after publishing research (pictured) suggesting AI can tell whether someone is straight or gay based on photos.
Microsoft has updated its facial recognition technology in an attempt to make it less 'racist'. It follows a study published in March that criticised the technology for recognising the gender of people with lighter skin tones more accurately than others. The system was found to perform best on males with lighter skin and worst on females with darker skin. The problem largely comes down to the data used to train the AI system not containing enough images of people with darker skin tones. Experts from the computing firm say their tweaks have significantly reduced these errors - by up to 20 times for people with darker faces.
Orlando, Florida, has stopped testing Amazon's facial recognition program after rights groups raised concerns that the service could be used in ways that violate civil liberties. The city ended a pilot program last week after its contract with Amazon.com Inc to use its Rekognition service expired. 'Partnering with innovative companies to test new technology - while also ensuring we uphold privacy laws and in no way violate the rights of others - is critical to us as we work to further keep our community safe,' the city and the Orlando Police Department said in a joint statement Monday. Orlando was one of several U.S. jurisdictions to which Amazon has pitched the service since unveiling it in late 2016 as a way to detect offensive content and enhance public safety.
Amazon is drawing the ire of its shareholders after an investigation found that it has been marketing powerful facial recognition tools to police. Nearly 20 groups of Amazon shareholders delivered a signed letter to CEO Jeff Bezos on Friday, pressuring the company to stop selling the software to law enforcement. The tool, called 'Rekognition', was first released in 2016, and Amazon has since been selling it on the cheap to several police departments around the country, with the Washington County Sheriff's Office in Oregon and the city of Orlando, Florida, among its customers. Shareholders, including the Social Equity Group and the Northwest Coalition for Responsible Investment, join the American Civil Liberties Union (ACLU) and other privacy advocates in pointing out privacy violations and the dangers of mass surveillance. 'We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations,' the shareholders write.
Facial recognition technology used by UK police is making thousands of mistakes - and now there could be legal repercussions. Civil liberties group Big Brother Watch has teamed up with Baroness Jenny Jones to ask the government and the Met to stop using the technology. They claim the use of facial recognition has proven to be 'dangerously authoritarian', inaccurate and a breach of rights protecting privacy and freedom of expression. If their request is rejected, the group says it will take the case to court in what would be the first legal challenge of its kind. South Wales Police, London's Met and Leicestershire Police are all trialling automated facial recognition systems in public places to identify wanted criminals.
Body cameras worn by police in the US could soon have built-in facial recognition software, sparking 'serious concerns' among civil liberties groups. The controversial technology, branded 'categorically unethical', would automatically scan and identify every person law enforcement interacts with. It is intended to help officers track down suspects more effectively, but experts are worried it could lead to false arrests and suffer from built-in racial and other biases. If developed, the equipment could become a regular sight on the streets of cities across the world. The manufacturer behind the move has now brought together a panel of experts to discuss the implications of the 'Minority Report'-style technology.
A report by Human Rights Watch and the Harvard Law School International Human Rights Clinic calls for humans to remain in control of all weapons systems at a time of rapid technological advances. It says that requiring humans to remain in control of critical functions during combat, including the selection of targets, saves lives and ensures that fighters comply with international law. 'Machines have long served as instruments of war, but historically humans have directed how they are used,' said Bonnie Docherty, senior arms division researcher at Human Rights Watch, in a statement. 'Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.' Some have argued in favour of robots on the battlefield, saying their use could save lives.